00:00:00.000 Started by upstream project "autotest-per-patch" build number 132308 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.140 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.141 The recommended git tool is: git 00:00:00.141 using credential 00000000-0000-0000-0000-000000000002 00:00:00.143 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.172 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.228 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.769 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.783 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.794 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.794 > git config core.sparsecheckout # timeout=10 00:00:04.806 > git read-tree -mu HEAD # timeout=10 00:00:04.823 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.839 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.840 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.923 [Pipeline] Start of Pipeline 00:00:04.938 [Pipeline] library 00:00:04.940 Loading library shm_lib@master 00:00:04.940 Library shm_lib@master is cached. Copying from home. 00:00:04.954 [Pipeline] node 00:00:04.965 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.967 [Pipeline] { 00:00:04.973 [Pipeline] catchError 00:00:04.974 [Pipeline] { 00:00:04.982 [Pipeline] wrap 00:00:04.988 [Pipeline] { 00:00:04.993 [Pipeline] stage 00:00:04.995 [Pipeline] { (Prologue) 00:00:05.007 [Pipeline] echo 00:00:05.008 Node: VM-host-SM16 00:00:05.012 [Pipeline] cleanWs 00:00:05.020 [WS-CLEANUP] Deleting project workspace... 00:00:05.020 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.026 [WS-CLEANUP] done 00:00:05.207 [Pipeline] setCustomBuildProperty 00:00:05.294 [Pipeline] httpRequest 00:00:05.613 [Pipeline] echo 00:00:05.615 Sorcerer 10.211.164.20 is alive 00:00:05.624 [Pipeline] retry 00:00:05.626 [Pipeline] { 00:00:05.640 [Pipeline] httpRequest 00:00:05.644 HttpMethod: GET 00:00:05.645 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.645 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.647 Response Code: HTTP/1.1 200 OK 00:00:05.647 Success: Status code 200 is in the accepted range: 200,404 00:00:05.648 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.588 [Pipeline] } 00:00:06.604 [Pipeline] // retry 00:00:06.612 [Pipeline] sh 00:00:06.895 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.910 [Pipeline] httpRequest 00:00:07.987 [Pipeline] echo 00:00:07.989 Sorcerer 10.211.164.20 is alive 00:00:07.998 [Pipeline] retry 00:00:08.000 [Pipeline] { 00:00:08.016 [Pipeline] httpRequest 00:00:08.022 HttpMethod: GET 00:00:08.023 URL: http://10.211.164.20/packages/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:08.023 Sending request to url: http://10.211.164.20/packages/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:08.024 Response Code: HTTP/1.1 200 OK 00:00:08.025 Success: Status code 200 is in the accepted range: 200,404 00:00:08.025 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:27.465 [Pipeline] } 00:00:27.487 [Pipeline] // retry 00:00:27.496 [Pipeline] sh 00:00:27.789 + tar --no-same-owner -xf spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:31.086 [Pipeline] sh 00:00:31.400 + git -C spdk log --oneline -n5 00:00:31.400 ca87521f7 test/nvme/interrupt: Verify pre|post IO cpu load 00:00:31.400 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:31.400 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:31.400 4bcab9fb9 correct kick for CQ full case 00:00:31.400 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:31.418 [Pipeline] writeFile 00:00:31.432 [Pipeline] sh 00:00:31.713 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.726 [Pipeline] sh 00:00:32.007 + cat autorun-spdk.conf 00:00:32.007 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.007 SPDK_TEST_NVMF=1 00:00:32.007 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.007 SPDK_TEST_URING=1 00:00:32.007 SPDK_TEST_USDT=1 00:00:32.007 SPDK_RUN_UBSAN=1 00:00:32.007 NET_TYPE=virt 00:00:32.007 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.015 RUN_NIGHTLY=0 00:00:32.017 [Pipeline] } 00:00:32.032 [Pipeline] // stage 00:00:32.049 [Pipeline] stage 00:00:32.051 [Pipeline] { (Run VM) 00:00:32.068 [Pipeline] sh 00:00:32.351 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.352 + echo 'Start stage prepare_nvme.sh' 00:00:32.352 Start stage prepare_nvme.sh 00:00:32.352 + [[ -n 5 ]] 00:00:32.352 + disk_prefix=ex5 00:00:32.352 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:32.352 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:32.352 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:32.352 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.352 ++ SPDK_TEST_NVMF=1 00:00:32.352 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.352 ++ SPDK_TEST_URING=1 00:00:32.352 ++ SPDK_TEST_USDT=1 00:00:32.352 ++ SPDK_RUN_UBSAN=1 00:00:32.352 ++ NET_TYPE=virt 00:00:32.352 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.352 ++ RUN_NIGHTLY=0 00:00:32.352 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:32.352 + nvme_files=() 00:00:32.352 + declare -A nvme_files 00:00:32.352 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.352 + nvme_files['nvme.img']=5G 00:00:32.352 + nvme_files['nvme-cmb.img']=5G 00:00:32.352 + nvme_files['nvme-multi0.img']=4G 00:00:32.352 + nvme_files['nvme-multi1.img']=4G 00:00:32.352 + nvme_files['nvme-multi2.img']=4G 00:00:32.352 + nvme_files['nvme-openstack.img']=8G 00:00:32.352 + nvme_files['nvme-zns.img']=5G 00:00:32.352 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.352 + (( SPDK_TEST_FTL == 1 )) 00:00:32.352 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.352 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:32.352 + for nvme in "${!nvme_files[@]}" 00:00:32.352 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:32.352 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.352 + for nvme in "${!nvme_files[@]}" 00:00:32.352 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:32.352 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.352 + for nvme in "${!nvme_files[@]}" 00:00:32.352 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:32.610 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:32.610 + for nvme in "${!nvme_files[@]}" 00:00:32.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:32.610 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.610 + for nvme in "${!nvme_files[@]}" 00:00:32.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:32.610 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.610 + for nvme in "${!nvme_files[@]}" 00:00:32.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:32.610 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.610 + for nvme in "${!nvme_files[@]}" 00:00:32.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:32.868 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.868 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:32.868 + echo 'End stage prepare_nvme.sh' 00:00:32.868 End stage prepare_nvme.sh 00:00:32.880 [Pipeline] sh 00:00:33.162 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.162 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:33.162 00:00:33.162 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:33.162 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:33.162 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:33.162 HELP=0 00:00:33.162 DRY_RUN=0 00:00:33.162 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:33.162 NVME_DISKS_TYPE=nvme,nvme, 00:00:33.162 NVME_AUTO_CREATE=0 00:00:33.162 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:33.162 NVME_CMB=,, 00:00:33.162 NVME_PMR=,, 00:00:33.162 NVME_ZNS=,, 00:00:33.162 NVME_MS=,, 00:00:33.162 NVME_FDP=,, 00:00:33.162 SPDK_VAGRANT_DISTRO=fedora39 00:00:33.162 SPDK_VAGRANT_VMCPU=10 00:00:33.162 SPDK_VAGRANT_VMRAM=12288 00:00:33.162 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.162 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.162 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.162 SPDK_OPENSTACK_NETWORK=0 00:00:33.162 VAGRANT_PACKAGE_BOX=0 00:00:33.162 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:33.162 FORCE_DISTRO=true 00:00:33.162 VAGRANT_BOX_VERSION= 00:00:33.162 EXTRA_VAGRANTFILES= 00:00:33.162 NIC_MODEL=e1000 00:00:33.162 00:00:33.162 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:33.162 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:36.449 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.449 ==> default: Creating image (snapshot of base box volume). 00:00:36.708 ==> default: Creating domain with the following settings... 
00:00:36.708 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731849025_55eb5718710ec2d392d6 00:00:36.708 ==> default: -- Domain type: kvm 00:00:36.708 ==> default: -- Cpus: 10 00:00:36.708 ==> default: -- Feature: acpi 00:00:36.708 ==> default: -- Feature: apic 00:00:36.708 ==> default: -- Feature: pae 00:00:36.708 ==> default: -- Memory: 12288M 00:00:36.708 ==> default: -- Memory Backing: hugepages: 00:00:36.708 ==> default: -- Management MAC: 00:00:36.708 ==> default: -- Loader: 00:00:36.708 ==> default: -- Nvram: 00:00:36.708 ==> default: -- Base box: spdk/fedora39 00:00:36.708 ==> default: -- Storage pool: default 00:00:36.708 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731849025_55eb5718710ec2d392d6.img (20G) 00:00:36.708 ==> default: -- Volume Cache: default 00:00:36.708 ==> default: -- Kernel: 00:00:36.708 ==> default: -- Initrd: 00:00:36.708 ==> default: -- Graphics Type: vnc 00:00:36.708 ==> default: -- Graphics Port: -1 00:00:36.708 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.708 ==> default: -- Graphics Password: Not defined 00:00:36.708 ==> default: -- Video Type: cirrus 00:00:36.708 ==> default: -- Video VRAM: 9216 00:00:36.708 ==> default: -- Sound Type: 00:00:36.708 ==> default: -- Keymap: en-us 00:00:36.708 ==> default: -- TPM Path: 00:00:36.708 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.708 ==> default: -- Command line args: 00:00:36.708 ==> default: -> value=-device, 00:00:36.708 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:36.708 ==> default: -> value=-drive, 00:00:36.708 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:36.708 ==> default: -> value=-device, 00:00:36.708 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.708 ==> default: -> value=-device, 00:00:36.708 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:36.708 ==> default: -> value=-drive, 00:00:36.708 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:36.708 ==> default: -> value=-device, 00:00:36.708 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.708 ==> default: -> value=-drive, 00:00:36.708 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:36.708 ==> default: -> value=-device, 00:00:36.708 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.708 ==> default: -> value=-drive, 00:00:36.708 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:36.708 ==> default: -> value=-device, 00:00:36.708 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.708 ==> default: Creating shared folders metadata... 00:00:36.708 ==> default: Starting domain. 00:00:38.087 ==> default: Waiting for domain to get an IP address... 00:00:56.184 ==> default: Waiting for SSH to become available... 00:00:56.184 ==> default: Configuring and enabling network interfaces... 
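
Note: the -device/-drive value pairs listed above are the part of the eventual QEMU command line that defines the test disk topology: controller nvme-0 (serial 12340) with a single namespace backed by ex5-nvme.img, and controller nvme-1 (serial 12341) with three namespaces backed by ex5-nvme-multi0/1/2.img. As a rough sketch, the same layout can be reproduced outside vagrant-libvirt by passing those arguments to QEMU directly; the machine and boot-disk options below are placeholders (CPU count and memory follow the SPDK_VAGRANT_VMCPU/VMRAM values above), only the NVMe arguments are taken from this run:

# Sketch only: boot disk and machine options are placeholders, not from this log;
# the NVMe -drive/-device arguments match the ones vagrant-libvirt logs above.
qemu-system-x86_64 -machine accel=kvm -smp 10 -m 12288 -display none \
  -drive file=fedora39-boot.img,if=virtio \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest this surfaces as nvme0 with one namespace and nvme1 with namespaces nvme1n1 through nvme1n3, which is what scripts/setup.sh status reports later in this log.
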
00:00:59.472 default: SSH address: 192.168.121.172:22 00:00:59.472 default: SSH username: vagrant 00:00:59.472 default: SSH auth method: private key 00:01:01.376 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:09.490 ==> default: Mounting SSHFS shared folder... 00:01:10.425 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:10.425 ==> default: Checking Mount.. 00:01:11.800 ==> default: Folder Successfully Mounted! 00:01:11.800 ==> default: Running provisioner: file... 00:01:12.367 default: ~/.gitconfig => .gitconfig 00:01:12.939 00:01:12.939 SUCCESS! 00:01:12.939 00:01:12.939 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:12.939 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:12.939 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:12.939 00:01:12.954 [Pipeline] } 00:01:12.971 [Pipeline] // stage 00:01:12.979 [Pipeline] dir 00:01:12.979 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:12.984 [Pipeline] { 00:01:12.997 [Pipeline] catchError 00:01:12.998 [Pipeline] { 00:01:13.020 [Pipeline] sh 00:01:13.302 + vagrant ssh-config --host vagrant 00:01:13.302 + sed -ne /^Host/,$p 00:01:13.302 + tee ssh_conf 00:01:17.489 Host vagrant 00:01:17.489 HostName 192.168.121.172 00:01:17.489 User vagrant 00:01:17.489 Port 22 00:01:17.489 UserKnownHostsFile /dev/null 00:01:17.489 StrictHostKeyChecking no 00:01:17.489 PasswordAuthentication no 00:01:17.489 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.489 IdentitiesOnly yes 00:01:17.489 LogLevel FATAL 00:01:17.489 ForwardAgent yes 00:01:17.489 ForwardX11 yes 00:01:17.489 00:01:17.501 [Pipeline] withEnv 00:01:17.503 [Pipeline] { 00:01:17.516 [Pipeline] sh 00:01:17.793 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.793 source /etc/os-release 00:01:17.793 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.793 # Minimal, systemd-like check. 00:01:17.793 if [[ -e /.dockerenv ]]; then 00:01:17.793 # Clear garbage from the node's name: 00:01:17.793 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.793 # $HOSTNAME is the actual container id 00:01:17.793 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.793 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.793 # We can assume this is a mount from a host where container is running, 00:01:17.793 # so fetch its hostname to easily identify the target swarm worker. 
00:01:17.793 container="$(< /etc/hostname) ($agent)" 00:01:17.793 else 00:01:17.793 # Fallback 00:01:17.793 container=$agent 00:01:17.793 fi 00:01:17.793 fi 00:01:17.793 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.793 00:01:18.062 [Pipeline] } 00:01:18.075 [Pipeline] // withEnv 00:01:18.083 [Pipeline] setCustomBuildProperty 00:01:18.098 [Pipeline] stage 00:01:18.100 [Pipeline] { (Tests) 00:01:18.116 [Pipeline] sh 00:01:18.392 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.403 [Pipeline] sh 00:01:18.679 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:18.693 [Pipeline] timeout 00:01:18.694 Timeout set to expire in 1 hr 0 min 00:01:18.696 [Pipeline] { 00:01:18.714 [Pipeline] sh 00:01:18.991 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.557 HEAD is now at ca87521f7 test/nvme/interrupt: Verify pre|post IO cpu load 00:01:19.568 [Pipeline] sh 00:01:19.848 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.125 [Pipeline] sh 00:01:20.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:20.424 [Pipeline] sh 00:01:20.705 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:20.705 ++ readlink -f spdk_repo 00:01:20.964 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:20.964 + [[ -n /home/vagrant/spdk_repo ]] 00:01:20.964 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:20.964 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:20.964 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:20.964 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:20.964 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:20.964 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:20.964 + cd /home/vagrant/spdk_repo 00:01:20.964 + source /etc/os-release 00:01:20.964 ++ NAME='Fedora Linux' 00:01:20.964 ++ VERSION='39 (Cloud Edition)' 00:01:20.964 ++ ID=fedora 00:01:20.964 ++ VERSION_ID=39 00:01:20.964 ++ VERSION_CODENAME= 00:01:20.964 ++ PLATFORM_ID=platform:f39 00:01:20.964 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.964 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.964 ++ LOGO=fedora-logo-icon 00:01:20.964 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.964 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.964 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.964 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.964 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.964 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.964 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.964 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.964 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.964 ++ SUPPORT_END=2024-11-12 00:01:20.964 ++ VARIANT='Cloud Edition' 00:01:20.964 ++ VARIANT_ID=cloud 00:01:20.964 + uname -a 00:01:20.964 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.964 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:21.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:21.222 Hugepages 00:01:21.222 node hugesize free / total 00:01:21.480 node0 1048576kB 0 / 0 00:01:21.480 node0 2048kB 0 / 0 00:01:21.480 00:01:21.480 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.480 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:21.480 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:21.480 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:21.480 + rm -f /tmp/spdk-ld-path 00:01:21.480 + source autorun-spdk.conf 00:01:21.480 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.480 ++ SPDK_TEST_NVMF=1 00:01:21.480 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.480 ++ SPDK_TEST_URING=1 00:01:21.480 ++ SPDK_TEST_USDT=1 00:01:21.480 ++ SPDK_RUN_UBSAN=1 00:01:21.480 ++ NET_TYPE=virt 00:01:21.480 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.480 ++ RUN_NIGHTLY=0 00:01:21.480 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.480 + [[ -n '' ]] 00:01:21.480 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:21.480 + for M in /var/spdk/build-*-manifest.txt 00:01:21.480 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:21.480 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.480 + for M in /var/spdk/build-*-manifest.txt 00:01:21.480 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.480 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.480 + for M in /var/spdk/build-*-manifest.txt 00:01:21.480 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.480 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.480 ++ uname 00:01:21.480 + [[ Linux == \L\i\n\u\x ]] 00:01:21.480 + sudo dmesg -T 00:01:21.480 + sudo dmesg --clear 00:01:21.480 + dmesg_pid=5367 00:01:21.480 + sudo dmesg -Tw 00:01:21.480 + [[ Fedora Linux == FreeBSD ]] 00:01:21.480 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.480 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.480 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.480 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.481 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.481 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.481 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.481 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:21.481 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.481 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.481 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.481 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.481 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.481 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.481 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.739 13:11:10 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:21.739 13:11:10 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.739 13:11:10 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:21.739 13:11:10 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:21.739 13:11:10 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.739 13:11:10 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:21.739 13:11:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:21.739 13:11:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:21.739 13:11:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:21.739 13:11:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:21.739 13:11:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:21.739 13:11:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.739 13:11:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.739 13:11:10 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.739 13:11:10 -- paths/export.sh@5 -- $ export PATH 00:01:21.739 13:11:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:21.739 13:11:10 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:21.739 13:11:10 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:21.739 13:11:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731849070.XXXXXX 00:01:21.739 13:11:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731849070.ZWoxIr 00:01:21.739 13:11:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:21.739 13:11:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:21.739 13:11:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:21.739 13:11:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:21.739 13:11:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:21.739 13:11:10 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:21.739 13:11:10 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:21.739 13:11:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.739 13:11:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:21.739 13:11:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:21.739 13:11:10 -- pm/common@17 -- $ local monitor 00:01:21.739 13:11:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.739 13:11:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:21.739 13:11:10 -- pm/common@25 -- $ sleep 1 00:01:21.739 13:11:10 -- pm/common@21 -- $ date +%s 00:01:21.739 13:11:10 -- pm/common@21 -- $ date +%s 00:01:21.739 13:11:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731849070 00:01:21.740 13:11:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731849070 00:01:21.740 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731849070_collect-cpu-load.pm.log 00:01:21.740 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731849070_collect-vmstat.pm.log 00:01:22.675 13:11:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:22.675 13:11:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:22.675 13:11:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:22.675 13:11:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:22.675 13:11:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:22.675 Sun Nov 17 01:11:11 PM UTC 2024 00:01:22.675 13:11:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:22.675 v25.01-pre-190-gca87521f7 00:01:22.675 13:11:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:22.675 13:11:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:22.675 13:11:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:22.675 13:11:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:22.675 13:11:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:22.675 13:11:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.675 ************************************ 00:01:22.675 START TEST ubsan 00:01:22.675 ************************************ 00:01:22.675 using ubsan 00:01:22.675 13:11:11 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:22.675 00:01:22.675 real 0m0.000s 00:01:22.675 user 0m0.000s 00:01:22.675 sys 0m0.000s 00:01:22.675 13:11:11 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:22.675 ************************************ 00:01:22.675 END TEST ubsan 00:01:22.675 ************************************ 00:01:22.675 13:11:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:22.934 13:11:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:22.934 13:11:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:22.934 13:11:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:22.934 13:11:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:22.934 13:11:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:22.934 13:11:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:22.934 13:11:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:22.934 13:11:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:22.934 13:11:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:22.934 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:22.934 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:23.500 Using 'verbs' RDMA provider 00:01:36.660 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:51.545 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:51.545 Creating mk/config.mk...done. 00:01:51.545 Creating mk/cc.flags.mk...done. 00:01:51.545 Type 'make' to build. 
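
Note: the configure invocation above is the last step before the build. To replay this build outside the CI harness, the same flag set can be run against a local SPDK checkout; a minimal sketch, assuming the in-guest path from this run (any local clone works) and the job count used by the run_test make step below:

# Sketch: replay the logged configure/make step on a local SPDK checkout.
cd /home/vagrant/spdk_repo/spdk   # assumption: path used inside the test VM
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-uring --with-shared
make -j10                         # same job count as the run_test make step
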
00:01:51.545 13:11:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:51.545 13:11:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:51.545 13:11:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:51.545 13:11:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.545 ************************************ 00:01:51.545 START TEST make 00:01:51.545 ************************************ 00:01:51.545 13:11:39 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:51.545 make[1]: Nothing to be done for 'all'. 00:02:03.753 The Meson build system 00:02:03.753 Version: 1.5.0 00:02:03.753 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:03.753 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:03.753 Build type: native build 00:02:03.754 Program cat found: YES (/usr/bin/cat) 00:02:03.754 Project name: DPDK 00:02:03.754 Project version: 24.03.0 00:02:03.754 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.754 C linker for the host machine: cc ld.bfd 2.40-14 00:02:03.754 Host machine cpu family: x86_64 00:02:03.754 Host machine cpu: x86_64 00:02:03.754 Message: ## Building in Developer Mode ## 00:02:03.754 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.754 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:03.754 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.754 Program python3 found: YES (/usr/bin/python3) 00:02:03.754 Program cat found: YES (/usr/bin/cat) 00:02:03.754 Compiler for C supports arguments -march=native: YES 00:02:03.754 Checking for size of "void *" : 8 00:02:03.754 Checking for size of "void *" : 8 (cached) 00:02:03.754 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:03.754 Library m found: YES 00:02:03.754 Library numa found: YES 00:02:03.754 Has header "numaif.h" : YES 00:02:03.754 Library fdt found: NO 00:02:03.754 Library execinfo found: NO 00:02:03.754 Has header "execinfo.h" : YES 00:02:03.754 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.754 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.754 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.754 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.754 Run-time dependency openssl found: YES 3.1.1 00:02:03.754 Run-time dependency libpcap found: YES 1.10.4 00:02:03.754 Has header "pcap.h" with dependency libpcap: YES 00:02:03.754 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.754 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.754 Compiler for C supports arguments -Wformat: YES 00:02:03.754 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.754 Compiler for C supports arguments -Wformat-security: NO 00:02:03.754 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.754 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.754 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.754 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.754 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.754 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.754 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.754 Compiler for C supports arguments -Wundef: YES 00:02:03.754 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.754 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:03.754 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.754 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.754 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.754 Program objdump found: YES (/usr/bin/objdump) 00:02:03.754 Compiler for C supports arguments -mavx512f: YES 00:02:03.754 Checking if "AVX512 checking" compiles: YES 00:02:03.754 Fetching value of define "__SSE4_2__" : 1 00:02:03.754 Fetching value of define "__AES__" : 1 00:02:03.754 Fetching value of define "__AVX__" : 1 00:02:03.754 Fetching value of define "__AVX2__" : 1 00:02:03.754 Fetching value of define "__AVX512BW__" : (undefined) 00:02:03.754 Fetching value of define "__AVX512CD__" : (undefined) 00:02:03.754 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:03.754 Fetching value of define "__AVX512F__" : (undefined) 00:02:03.754 Fetching value of define "__AVX512VL__" : (undefined) 00:02:03.754 Fetching value of define "__PCLMUL__" : 1 00:02:03.754 Fetching value of define "__RDRND__" : 1 00:02:03.754 Fetching value of define "__RDSEED__" : 1 00:02:03.754 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.754 Fetching value of define "__znver1__" : (undefined) 00:02:03.754 Fetching value of define "__znver2__" : (undefined) 00:02:03.754 Fetching value of define "__znver3__" : (undefined) 00:02:03.754 Fetching value of define "__znver4__" : (undefined) 00:02:03.754 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.754 Message: lib/log: Defining dependency "log" 00:02:03.754 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.754 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.754 Checking for function "getentropy" : NO 00:02:03.754 Message: lib/eal: Defining dependency "eal" 00:02:03.754 Message: lib/ring: Defining dependency "ring" 00:02:03.754 Message: lib/rcu: Defining dependency "rcu" 00:02:03.754 Message: lib/mempool: Defining dependency "mempool" 00:02:03.754 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.754 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:03.754 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:03.754 Compiler for C supports arguments -mpclmul: YES 00:02:03.754 Compiler for C supports arguments -maes: YES 00:02:03.754 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.754 Compiler for C supports arguments -mavx512bw: YES 00:02:03.754 Compiler for C supports arguments -mavx512dq: YES 00:02:03.754 Compiler for C supports arguments -mavx512vl: YES 00:02:03.754 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.754 Compiler for C supports arguments -mavx2: YES 00:02:03.754 Compiler for C supports arguments -mavx: YES 00:02:03.754 Message: lib/net: Defining dependency "net" 00:02:03.754 Message: lib/meter: Defining dependency "meter" 00:02:03.754 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.754 Message: lib/pci: Defining dependency "pci" 00:02:03.754 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.754 Message: lib/hash: Defining dependency "hash" 00:02:03.754 Message: lib/timer: Defining dependency "timer" 00:02:03.754 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.754 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.754 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.754 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.754 Message: lib/power: Defining 
dependency "power" 00:02:03.754 Message: lib/reorder: Defining dependency "reorder" 00:02:03.754 Message: lib/security: Defining dependency "security" 00:02:03.754 Has header "linux/userfaultfd.h" : YES 00:02:03.754 Has header "linux/vduse.h" : YES 00:02:03.754 Message: lib/vhost: Defining dependency "vhost" 00:02:03.754 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.754 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.754 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.754 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.754 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:03.754 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:03.754 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:03.754 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:03.754 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:03.754 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:03.754 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.754 Configuring doxy-api-html.conf using configuration 00:02:03.754 Configuring doxy-api-man.conf using configuration 00:02:03.754 Program mandb found: YES (/usr/bin/mandb) 00:02:03.754 Program sphinx-build found: NO 00:02:03.754 Configuring rte_build_config.h using configuration 00:02:03.754 Message: 00:02:03.754 ================= 00:02:03.754 Applications Enabled 00:02:03.754 ================= 00:02:03.754 00:02:03.754 apps: 00:02:03.754 00:02:03.754 00:02:03.754 Message: 00:02:03.754 ================= 00:02:03.754 Libraries Enabled 00:02:03.754 ================= 00:02:03.754 00:02:03.754 libs: 00:02:03.754 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.754 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:03.754 cryptodev, dmadev, power, reorder, security, vhost, 00:02:03.754 00:02:03.754 Message: 00:02:03.754 =============== 00:02:03.754 Drivers Enabled 00:02:03.754 =============== 00:02:03.754 00:02:03.754 common: 00:02:03.754 00:02:03.754 bus: 00:02:03.754 pci, vdev, 00:02:03.754 mempool: 00:02:03.754 ring, 00:02:03.754 dma: 00:02:03.754 00:02:03.754 net: 00:02:03.754 00:02:03.754 crypto: 00:02:03.754 00:02:03.754 compress: 00:02:03.754 00:02:03.754 vdpa: 00:02:03.754 00:02:03.754 00:02:03.754 Message: 00:02:03.754 ================= 00:02:03.754 Content Skipped 00:02:03.754 ================= 00:02:03.754 00:02:03.754 apps: 00:02:03.754 dumpcap: explicitly disabled via build config 00:02:03.754 graph: explicitly disabled via build config 00:02:03.754 pdump: explicitly disabled via build config 00:02:03.754 proc-info: explicitly disabled via build config 00:02:03.754 test-acl: explicitly disabled via build config 00:02:03.754 test-bbdev: explicitly disabled via build config 00:02:03.754 test-cmdline: explicitly disabled via build config 00:02:03.754 test-compress-perf: explicitly disabled via build config 00:02:03.754 test-crypto-perf: explicitly disabled via build config 00:02:03.754 test-dma-perf: explicitly disabled via build config 00:02:03.754 test-eventdev: explicitly disabled via build config 00:02:03.754 test-fib: explicitly disabled via build config 00:02:03.754 test-flow-perf: explicitly disabled via build config 00:02:03.754 test-gpudev: explicitly disabled via build config 00:02:03.754 test-mldev: explicitly disabled via build config 00:02:03.754 test-pipeline: 
explicitly disabled via build config 00:02:03.754 test-pmd: explicitly disabled via build config 00:02:03.754 test-regex: explicitly disabled via build config 00:02:03.754 test-sad: explicitly disabled via build config 00:02:03.754 test-security-perf: explicitly disabled via build config 00:02:03.754 00:02:03.754 libs: 00:02:03.754 argparse: explicitly disabled via build config 00:02:03.754 metrics: explicitly disabled via build config 00:02:03.754 acl: explicitly disabled via build config 00:02:03.754 bbdev: explicitly disabled via build config 00:02:03.754 bitratestats: explicitly disabled via build config 00:02:03.755 bpf: explicitly disabled via build config 00:02:03.755 cfgfile: explicitly disabled via build config 00:02:03.755 distributor: explicitly disabled via build config 00:02:03.755 efd: explicitly disabled via build config 00:02:03.755 eventdev: explicitly disabled via build config 00:02:03.755 dispatcher: explicitly disabled via build config 00:02:03.755 gpudev: explicitly disabled via build config 00:02:03.755 gro: explicitly disabled via build config 00:02:03.755 gso: explicitly disabled via build config 00:02:03.755 ip_frag: explicitly disabled via build config 00:02:03.755 jobstats: explicitly disabled via build config 00:02:03.755 latencystats: explicitly disabled via build config 00:02:03.755 lpm: explicitly disabled via build config 00:02:03.755 member: explicitly disabled via build config 00:02:03.755 pcapng: explicitly disabled via build config 00:02:03.755 rawdev: explicitly disabled via build config 00:02:03.755 regexdev: explicitly disabled via build config 00:02:03.755 mldev: explicitly disabled via build config 00:02:03.755 rib: explicitly disabled via build config 00:02:03.755 sched: explicitly disabled via build config 00:02:03.755 stack: explicitly disabled via build config 00:02:03.755 ipsec: explicitly disabled via build config 00:02:03.755 pdcp: explicitly disabled via build config 00:02:03.755 fib: explicitly disabled via build config 00:02:03.755 port: explicitly disabled via build config 00:02:03.755 pdump: explicitly disabled via build config 00:02:03.755 table: explicitly disabled via build config 00:02:03.755 pipeline: explicitly disabled via build config 00:02:03.755 graph: explicitly disabled via build config 00:02:03.755 node: explicitly disabled via build config 00:02:03.755 00:02:03.755 drivers: 00:02:03.755 common/cpt: not in enabled drivers build config 00:02:03.755 common/dpaax: not in enabled drivers build config 00:02:03.755 common/iavf: not in enabled drivers build config 00:02:03.755 common/idpf: not in enabled drivers build config 00:02:03.755 common/ionic: not in enabled drivers build config 00:02:03.755 common/mvep: not in enabled drivers build config 00:02:03.755 common/octeontx: not in enabled drivers build config 00:02:03.755 bus/auxiliary: not in enabled drivers build config 00:02:03.755 bus/cdx: not in enabled drivers build config 00:02:03.755 bus/dpaa: not in enabled drivers build config 00:02:03.755 bus/fslmc: not in enabled drivers build config 00:02:03.755 bus/ifpga: not in enabled drivers build config 00:02:03.755 bus/platform: not in enabled drivers build config 00:02:03.755 bus/uacce: not in enabled drivers build config 00:02:03.755 bus/vmbus: not in enabled drivers build config 00:02:03.755 common/cnxk: not in enabled drivers build config 00:02:03.755 common/mlx5: not in enabled drivers build config 00:02:03.755 common/nfp: not in enabled drivers build config 00:02:03.755 common/nitrox: not in enabled drivers build config 
00:02:03.755 common/qat: not in enabled drivers build config 00:02:03.755 common/sfc_efx: not in enabled drivers build config 00:02:03.755 mempool/bucket: not in enabled drivers build config 00:02:03.755 mempool/cnxk: not in enabled drivers build config 00:02:03.755 mempool/dpaa: not in enabled drivers build config 00:02:03.755 mempool/dpaa2: not in enabled drivers build config 00:02:03.755 mempool/octeontx: not in enabled drivers build config 00:02:03.755 mempool/stack: not in enabled drivers build config 00:02:03.755 dma/cnxk: not in enabled drivers build config 00:02:03.755 dma/dpaa: not in enabled drivers build config 00:02:03.755 dma/dpaa2: not in enabled drivers build config 00:02:03.755 dma/hisilicon: not in enabled drivers build config 00:02:03.755 dma/idxd: not in enabled drivers build config 00:02:03.755 dma/ioat: not in enabled drivers build config 00:02:03.755 dma/skeleton: not in enabled drivers build config 00:02:03.755 net/af_packet: not in enabled drivers build config 00:02:03.755 net/af_xdp: not in enabled drivers build config 00:02:03.755 net/ark: not in enabled drivers build config 00:02:03.755 net/atlantic: not in enabled drivers build config 00:02:03.755 net/avp: not in enabled drivers build config 00:02:03.755 net/axgbe: not in enabled drivers build config 00:02:03.755 net/bnx2x: not in enabled drivers build config 00:02:03.755 net/bnxt: not in enabled drivers build config 00:02:03.755 net/bonding: not in enabled drivers build config 00:02:03.755 net/cnxk: not in enabled drivers build config 00:02:03.755 net/cpfl: not in enabled drivers build config 00:02:03.755 net/cxgbe: not in enabled drivers build config 00:02:03.755 net/dpaa: not in enabled drivers build config 00:02:03.755 net/dpaa2: not in enabled drivers build config 00:02:03.755 net/e1000: not in enabled drivers build config 00:02:03.755 net/ena: not in enabled drivers build config 00:02:03.755 net/enetc: not in enabled drivers build config 00:02:03.755 net/enetfec: not in enabled drivers build config 00:02:03.755 net/enic: not in enabled drivers build config 00:02:03.755 net/failsafe: not in enabled drivers build config 00:02:03.755 net/fm10k: not in enabled drivers build config 00:02:03.755 net/gve: not in enabled drivers build config 00:02:03.755 net/hinic: not in enabled drivers build config 00:02:03.755 net/hns3: not in enabled drivers build config 00:02:03.755 net/i40e: not in enabled drivers build config 00:02:03.755 net/iavf: not in enabled drivers build config 00:02:03.755 net/ice: not in enabled drivers build config 00:02:03.755 net/idpf: not in enabled drivers build config 00:02:03.755 net/igc: not in enabled drivers build config 00:02:03.755 net/ionic: not in enabled drivers build config 00:02:03.755 net/ipn3ke: not in enabled drivers build config 00:02:03.755 net/ixgbe: not in enabled drivers build config 00:02:03.755 net/mana: not in enabled drivers build config 00:02:03.755 net/memif: not in enabled drivers build config 00:02:03.755 net/mlx4: not in enabled drivers build config 00:02:03.755 net/mlx5: not in enabled drivers build config 00:02:03.755 net/mvneta: not in enabled drivers build config 00:02:03.755 net/mvpp2: not in enabled drivers build config 00:02:03.755 net/netvsc: not in enabled drivers build config 00:02:03.755 net/nfb: not in enabled drivers build config 00:02:03.755 net/nfp: not in enabled drivers build config 00:02:03.755 net/ngbe: not in enabled drivers build config 00:02:03.755 net/null: not in enabled drivers build config 00:02:03.755 net/octeontx: not in enabled drivers 
build config 00:02:03.755 net/octeon_ep: not in enabled drivers build config 00:02:03.755 net/pcap: not in enabled drivers build config 00:02:03.755 net/pfe: not in enabled drivers build config 00:02:03.755 net/qede: not in enabled drivers build config 00:02:03.755 net/ring: not in enabled drivers build config 00:02:03.755 net/sfc: not in enabled drivers build config 00:02:03.755 net/softnic: not in enabled drivers build config 00:02:03.755 net/tap: not in enabled drivers build config 00:02:03.755 net/thunderx: not in enabled drivers build config 00:02:03.755 net/txgbe: not in enabled drivers build config 00:02:03.755 net/vdev_netvsc: not in enabled drivers build config 00:02:03.755 net/vhost: not in enabled drivers build config 00:02:03.755 net/virtio: not in enabled drivers build config 00:02:03.755 net/vmxnet3: not in enabled drivers build config 00:02:03.755 raw/*: missing internal dependency, "rawdev" 00:02:03.755 crypto/armv8: not in enabled drivers build config 00:02:03.755 crypto/bcmfs: not in enabled drivers build config 00:02:03.755 crypto/caam_jr: not in enabled drivers build config 00:02:03.755 crypto/ccp: not in enabled drivers build config 00:02:03.755 crypto/cnxk: not in enabled drivers build config 00:02:03.755 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.755 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.755 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.755 crypto/mlx5: not in enabled drivers build config 00:02:03.755 crypto/mvsam: not in enabled drivers build config 00:02:03.755 crypto/nitrox: not in enabled drivers build config 00:02:03.755 crypto/null: not in enabled drivers build config 00:02:03.755 crypto/octeontx: not in enabled drivers build config 00:02:03.755 crypto/openssl: not in enabled drivers build config 00:02:03.755 crypto/scheduler: not in enabled drivers build config 00:02:03.755 crypto/uadk: not in enabled drivers build config 00:02:03.755 crypto/virtio: not in enabled drivers build config 00:02:03.755 compress/isal: not in enabled drivers build config 00:02:03.755 compress/mlx5: not in enabled drivers build config 00:02:03.755 compress/nitrox: not in enabled drivers build config 00:02:03.755 compress/octeontx: not in enabled drivers build config 00:02:03.755 compress/zlib: not in enabled drivers build config 00:02:03.755 regex/*: missing internal dependency, "regexdev" 00:02:03.755 ml/*: missing internal dependency, "mldev" 00:02:03.755 vdpa/ifc: not in enabled drivers build config 00:02:03.755 vdpa/mlx5: not in enabled drivers build config 00:02:03.755 vdpa/nfp: not in enabled drivers build config 00:02:03.755 vdpa/sfc: not in enabled drivers build config 00:02:03.755 event/*: missing internal dependency, "eventdev" 00:02:03.755 baseband/*: missing internal dependency, "bbdev" 00:02:03.755 gpu/*: missing internal dependency, "gpudev" 00:02:03.755 00:02:03.755 00:02:03.755 Build targets in project: 85 00:02:03.755 00:02:03.755 DPDK 24.03.0 00:02:03.755 00:02:03.755 User defined options 00:02:03.755 buildtype : debug 00:02:03.755 default_library : shared 00:02:03.755 libdir : lib 00:02:03.755 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:03.755 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:03.755 c_link_args : 00:02:03.755 cpu_instruction_set: native 00:02:03.755 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:03.755 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:03.755 enable_docs : false 00:02:03.755 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:03.755 enable_kmods : false 00:02:03.755 max_lcores : 128 00:02:03.755 tests : false 00:02:03.755 00:02:03.755 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.755 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:03.755 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.756 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.756 [3/268] Linking static target lib/librte_kvargs.a 00:02:03.756 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.756 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.756 [6/268] Linking static target lib/librte_log.a 00:02:04.015 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.015 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.015 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.015 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.274 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.274 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.274 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.274 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.274 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.274 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.274 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.274 [18/268] Linking static target lib/librte_telemetry.a 00:02:04.532 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.532 [20/268] Linking target lib/librte_log.so.24.1 00:02:04.791 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:04.791 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.791 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:05.050 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.050 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.050 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.050 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.309 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.309 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.309 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.309 
[31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.309 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.309 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.309 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:05.309 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.568 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.568 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.827 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.827 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.086 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.086 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.086 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.086 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.086 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.086 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:06.345 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.345 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.345 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.345 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.606 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:06.865 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:06.865 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:06.865 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.123 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.123 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.381 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.381 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.381 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.381 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.381 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.381 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.640 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.899 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.899 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.159 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.159 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.159 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.159 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.159 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.159 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 
00:02:08.418 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.418 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.418 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.418 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.729 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.729 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.729 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.990 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.990 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.990 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.990 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.990 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.249 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.249 [84/268] Linking static target lib/librte_ring.a 00:02:09.249 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.508 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.508 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.508 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.508 [89/268] Linking static target lib/librte_rcu.a 00:02:09.508 [90/268] Linking static target lib/librte_eal.a 00:02:09.508 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.767 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.767 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.767 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.767 [95/268] Linking static target lib/librte_mempool.a 00:02:09.767 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:10.026 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.026 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.026 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:10.285 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:10.285 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:10.285 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:10.285 [103/268] Linking static target lib/librte_mbuf.a 00:02:10.543 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:10.543 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:10.543 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:10.543 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:10.802 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:10.802 [109/268] Linking static target lib/librte_meter.a 00:02:10.802 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:10.802 [111/268] Linking static target lib/librte_net.a 00:02:11.060 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.060 [113/268] Generating lib/mempool.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:11.319 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.319 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.319 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.319 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.319 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.578 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.836 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.836 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:12.095 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.095 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.095 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:12.354 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.354 [126/268] Linking static target lib/librte_pci.a 00:02:12.613 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:12.613 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.613 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.613 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.613 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:12.613 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.613 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.872 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.872 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:12.872 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.872 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.872 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.872 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.872 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.872 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.872 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.872 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.872 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:13.131 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.131 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.131 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.389 [148/268] Linking static target lib/librte_cmdline.a 00:02:13.389 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.389 [150/268] Linking static target lib/librte_ethdev.a 00:02:13.648 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.648 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 
00:02:13.648 [153/268] Linking static target lib/librte_timer.a 00:02:13.648 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.648 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.648 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.906 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:13.906 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.164 [159/268] Linking static target lib/librte_hash.a 00:02:14.164 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.164 [161/268] Linking static target lib/librte_compressdev.a 00:02:14.422 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.422 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.422 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.422 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.680 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.680 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.938 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.938 [169/268] Linking static target lib/librte_dmadev.a 00:02:14.938 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.938 [171/268] Linking static target lib/librte_cryptodev.a 00:02:14.938 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.938 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.197 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.197 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.197 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.456 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.456 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.714 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.714 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.714 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.714 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.714 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.714 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.283 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.283 [186/268] Linking static target lib/librte_reorder.a 00:02:16.283 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.283 [188/268] Linking static target lib/librte_power.a 00:02:16.542 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.542 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.542 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.542 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 
00:02:16.542 [193/268] Linking static target lib/librte_security.a 00:02:16.801 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.066 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.337 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.337 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.337 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.595 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.595 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.595 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.854 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.113 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.113 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.113 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.371 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.371 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.372 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.372 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.372 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.372 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.372 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.630 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.630 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.630 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.630 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:18.630 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.630 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.630 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.630 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:18.889 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.889 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.889 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.149 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.149 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.149 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.149 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:19.149 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.718 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.718 
[230/268] Linking static target lib/librte_vhost.a 00:02:20.655 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.655 [232/268] Linking target lib/librte_eal.so.24.1 00:02:20.914 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:20.914 [234/268] Linking target lib/librte_pci.so.24.1 00:02:20.914 [235/268] Linking target lib/librte_meter.so.24.1 00:02:20.914 [236/268] Linking target lib/librte_timer.so.24.1 00:02:20.914 [237/268] Linking target lib/librte_ring.so.24.1 00:02:20.914 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:20.914 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.173 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.173 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.173 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.173 [243/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.173 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.173 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.173 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.173 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:21.173 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:21.173 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.432 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:21.432 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.432 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:21.432 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:21.692 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:21.692 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:21.692 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:21.692 [257/268] Linking target lib/librte_net.so.24.1 00:02:21.692 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:21.692 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:21.692 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.692 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:21.692 [262/268] Linking target lib/librte_hash.so.24.1 00:02:21.951 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:21.951 [264/268] Linking target lib/librte_security.so.24.1 00:02:21.951 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:21.951 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:21.951 [267/268] Linking target lib/librte_power.so.24.1 00:02:21.951 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:21.951 INFO: autodetecting backend as ninja 00:02:21.951 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:43.889 CC lib/ut/ut.o 00:02:43.889 CC lib/ut_mock/mock.o 00:02:43.889 CC lib/log/log.o 00:02:43.889 CC lib/log/log_flags.o 00:02:43.889 CC lib/log/log_deprecated.o 00:02:43.889 LIB libspdk_ut_mock.a 00:02:43.889 LIB libspdk_ut.a 00:02:43.889 SO 
libspdk_ut_mock.so.6.0 00:02:43.889 SO libspdk_ut.so.2.0 00:02:43.889 LIB libspdk_log.a 00:02:43.889 SO libspdk_log.so.7.1 00:02:43.889 SYMLINK libspdk_ut.so 00:02:43.889 SYMLINK libspdk_ut_mock.so 00:02:43.889 SYMLINK libspdk_log.so 00:02:43.889 CC lib/dma/dma.o 00:02:43.889 CXX lib/trace_parser/trace.o 00:02:43.889 CC lib/util/base64.o 00:02:43.889 CC lib/util/bit_array.o 00:02:43.889 CC lib/util/crc16.o 00:02:43.889 CC lib/util/cpuset.o 00:02:43.889 CC lib/util/crc32.o 00:02:43.889 CC lib/util/crc32c.o 00:02:43.889 CC lib/ioat/ioat.o 00:02:44.148 CC lib/util/crc32_ieee.o 00:02:44.148 CC lib/util/crc64.o 00:02:44.148 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.148 CC lib/vfio_user/host/vfio_user.o 00:02:44.148 CC lib/util/dif.o 00:02:44.148 LIB libspdk_dma.a 00:02:44.148 SO libspdk_dma.so.5.0 00:02:44.148 CC lib/util/fd.o 00:02:44.148 CC lib/util/fd_group.o 00:02:44.148 CC lib/util/file.o 00:02:44.148 CC lib/util/hexlify.o 00:02:44.407 SYMLINK libspdk_dma.so 00:02:44.407 CC lib/util/iov.o 00:02:44.407 LIB libspdk_ioat.a 00:02:44.407 CC lib/util/math.o 00:02:44.407 SO libspdk_ioat.so.7.0 00:02:44.407 LIB libspdk_vfio_user.a 00:02:44.407 CC lib/util/net.o 00:02:44.407 CC lib/util/pipe.o 00:02:44.407 SYMLINK libspdk_ioat.so 00:02:44.407 CC lib/util/strerror_tls.o 00:02:44.407 CC lib/util/string.o 00:02:44.407 SO libspdk_vfio_user.so.5.0 00:02:44.407 CC lib/util/uuid.o 00:02:44.407 CC lib/util/xor.o 00:02:44.407 SYMLINK libspdk_vfio_user.so 00:02:44.407 CC lib/util/zipf.o 00:02:44.407 CC lib/util/md5.o 00:02:44.666 LIB libspdk_util.a 00:02:44.925 SO libspdk_util.so.10.1 00:02:44.925 SYMLINK libspdk_util.so 00:02:45.183 LIB libspdk_trace_parser.a 00:02:45.183 SO libspdk_trace_parser.so.6.0 00:02:45.183 CC lib/env_dpdk/env.o 00:02:45.183 CC lib/vmd/vmd.o 00:02:45.183 CC lib/env_dpdk/memory.o 00:02:45.183 CC lib/vmd/led.o 00:02:45.183 CC lib/env_dpdk/pci.o 00:02:45.183 CC lib/idxd/idxd.o 00:02:45.183 CC lib/rdma_utils/rdma_utils.o 00:02:45.183 CC lib/conf/conf.o 00:02:45.183 CC lib/json/json_parse.o 00:02:45.183 SYMLINK libspdk_trace_parser.so 00:02:45.183 CC lib/idxd/idxd_user.o 00:02:45.442 CC lib/idxd/idxd_kernel.o 00:02:45.442 LIB libspdk_conf.a 00:02:45.442 CC lib/json/json_util.o 00:02:45.442 SO libspdk_conf.so.6.0 00:02:45.442 LIB libspdk_rdma_utils.a 00:02:45.442 SYMLINK libspdk_conf.so 00:02:45.442 CC lib/json/json_write.o 00:02:45.442 CC lib/env_dpdk/init.o 00:02:45.442 CC lib/env_dpdk/threads.o 00:02:45.442 SO libspdk_rdma_utils.so.1.0 00:02:45.442 CC lib/env_dpdk/pci_ioat.o 00:02:45.701 SYMLINK libspdk_rdma_utils.so 00:02:45.701 CC lib/env_dpdk/pci_virtio.o 00:02:45.701 CC lib/env_dpdk/pci_vmd.o 00:02:45.701 CC lib/env_dpdk/pci_idxd.o 00:02:45.701 CC lib/env_dpdk/pci_event.o 00:02:45.701 LIB libspdk_idxd.a 00:02:45.701 LIB libspdk_json.a 00:02:45.701 CC lib/env_dpdk/sigbus_handler.o 00:02:45.701 LIB libspdk_vmd.a 00:02:45.960 SO libspdk_idxd.so.12.1 00:02:45.960 CC lib/env_dpdk/pci_dpdk.o 00:02:45.960 SO libspdk_json.so.6.0 00:02:45.960 SO libspdk_vmd.so.6.0 00:02:45.960 CC lib/rdma_provider/common.o 00:02:45.960 SYMLINK libspdk_idxd.so 00:02:45.960 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.960 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.960 SYMLINK libspdk_json.so 00:02:45.960 SYMLINK libspdk_vmd.so 00:02:45.960 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:46.219 CC lib/jsonrpc/jsonrpc_server.o 00:02:46.219 CC lib/jsonrpc/jsonrpc_client.o 00:02:46.219 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:46.219 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:46.219 LIB libspdk_rdma_provider.a 
00:02:46.219 SO libspdk_rdma_provider.so.7.0 00:02:46.219 SYMLINK libspdk_rdma_provider.so 00:02:46.477 LIB libspdk_jsonrpc.a 00:02:46.477 SO libspdk_jsonrpc.so.6.0 00:02:46.477 SYMLINK libspdk_jsonrpc.so 00:02:46.477 LIB libspdk_env_dpdk.a 00:02:46.736 SO libspdk_env_dpdk.so.15.1 00:02:46.736 CC lib/rpc/rpc.o 00:02:46.736 SYMLINK libspdk_env_dpdk.so 00:02:46.995 LIB libspdk_rpc.a 00:02:46.995 SO libspdk_rpc.so.6.0 00:02:46.995 SYMLINK libspdk_rpc.so 00:02:47.254 CC lib/keyring/keyring.o 00:02:47.254 CC lib/keyring/keyring_rpc.o 00:02:47.254 CC lib/notify/notify.o 00:02:47.254 CC lib/notify/notify_rpc.o 00:02:47.254 CC lib/trace/trace.o 00:02:47.254 CC lib/trace/trace_rpc.o 00:02:47.254 CC lib/trace/trace_flags.o 00:02:47.513 LIB libspdk_notify.a 00:02:47.513 LIB libspdk_keyring.a 00:02:47.513 SO libspdk_notify.so.6.0 00:02:47.513 SO libspdk_keyring.so.2.0 00:02:47.513 SYMLINK libspdk_notify.so 00:02:47.513 LIB libspdk_trace.a 00:02:47.513 SYMLINK libspdk_keyring.so 00:02:47.513 SO libspdk_trace.so.11.0 00:02:47.774 SYMLINK libspdk_trace.so 00:02:48.038 CC lib/sock/sock.o 00:02:48.039 CC lib/sock/sock_rpc.o 00:02:48.039 CC lib/thread/thread.o 00:02:48.039 CC lib/thread/iobuf.o 00:02:48.297 LIB libspdk_sock.a 00:02:48.556 SO libspdk_sock.so.10.0 00:02:48.556 SYMLINK libspdk_sock.so 00:02:48.815 CC lib/nvme/nvme_ctrlr.o 00:02:48.815 CC lib/nvme/nvme_fabric.o 00:02:48.815 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.815 CC lib/nvme/nvme_pcie_common.o 00:02:48.815 CC lib/nvme/nvme_ns_cmd.o 00:02:48.815 CC lib/nvme/nvme_pcie.o 00:02:48.815 CC lib/nvme/nvme_ns.o 00:02:48.815 CC lib/nvme/nvme_qpair.o 00:02:48.815 CC lib/nvme/nvme.o 00:02:49.383 LIB libspdk_thread.a 00:02:49.383 SO libspdk_thread.so.11.0 00:02:49.641 SYMLINK libspdk_thread.so 00:02:49.641 CC lib/nvme/nvme_quirks.o 00:02:49.641 CC lib/nvme/nvme_transport.o 00:02:49.641 CC lib/nvme/nvme_discovery.o 00:02:49.641 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.641 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:49.900 CC lib/nvme/nvme_tcp.o 00:02:49.900 CC lib/nvme/nvme_opal.o 00:02:49.900 CC lib/nvme/nvme_io_msg.o 00:02:49.900 CC lib/nvme/nvme_poll_group.o 00:02:50.164 CC lib/nvme/nvme_zns.o 00:02:50.425 CC lib/nvme/nvme_stubs.o 00:02:50.425 CC lib/nvme/nvme_auth.o 00:02:50.425 CC lib/accel/accel.o 00:02:50.425 CC lib/nvme/nvme_cuse.o 00:02:50.683 CC lib/blob/blobstore.o 00:02:50.684 CC lib/init/json_config.o 00:02:50.684 CC lib/nvme/nvme_rdma.o 00:02:50.943 CC lib/init/subsystem.o 00:02:50.943 CC lib/virtio/virtio.o 00:02:50.943 CC lib/fsdev/fsdev.o 00:02:51.202 CC lib/init/subsystem_rpc.o 00:02:51.202 CC lib/init/rpc.o 00:02:51.202 CC lib/virtio/virtio_vhost_user.o 00:02:51.202 CC lib/virtio/virtio_vfio_user.o 00:02:51.202 CC lib/fsdev/fsdev_io.o 00:02:51.202 LIB libspdk_init.a 00:02:51.460 CC lib/virtio/virtio_pci.o 00:02:51.460 SO libspdk_init.so.6.0 00:02:51.460 CC lib/fsdev/fsdev_rpc.o 00:02:51.460 SYMLINK libspdk_init.so 00:02:51.460 CC lib/accel/accel_rpc.o 00:02:51.460 CC lib/accel/accel_sw.o 00:02:51.460 CC lib/blob/request.o 00:02:51.460 CC lib/blob/zeroes.o 00:02:51.718 LIB libspdk_virtio.a 00:02:51.718 CC lib/blob/blob_bs_dev.o 00:02:51.718 CC lib/event/app.o 00:02:51.718 SO libspdk_virtio.so.7.0 00:02:51.718 LIB libspdk_fsdev.a 00:02:51.718 CC lib/event/reactor.o 00:02:51.718 CC lib/event/log_rpc.o 00:02:51.718 SO libspdk_fsdev.so.2.0 00:02:51.718 SYMLINK libspdk_virtio.so 00:02:51.718 CC lib/event/app_rpc.o 00:02:51.718 SYMLINK libspdk_fsdev.so 00:02:51.718 LIB libspdk_accel.a 00:02:51.978 SO libspdk_accel.so.16.0 00:02:51.978 CC 
lib/event/scheduler_static.o 00:02:51.978 SYMLINK libspdk_accel.so 00:02:51.978 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.978 LIB libspdk_nvme.a 00:02:52.236 CC lib/bdev/bdev.o 00:02:52.236 CC lib/bdev/bdev_rpc.o 00:02:52.236 CC lib/bdev/part.o 00:02:52.236 CC lib/bdev/bdev_zone.o 00:02:52.236 CC lib/bdev/scsi_nvme.o 00:02:52.236 LIB libspdk_event.a 00:02:52.236 SO libspdk_event.so.14.0 00:02:52.236 SO libspdk_nvme.so.15.0 00:02:52.236 SYMLINK libspdk_event.so 00:02:52.496 SYMLINK libspdk_nvme.so 00:02:52.755 LIB libspdk_fuse_dispatcher.a 00:02:52.755 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.755 SYMLINK libspdk_fuse_dispatcher.so 00:02:53.691 LIB libspdk_blob.a 00:02:53.691 SO libspdk_blob.so.11.0 00:02:53.691 SYMLINK libspdk_blob.so 00:02:53.951 CC lib/lvol/lvol.o 00:02:53.951 CC lib/blobfs/blobfs.o 00:02:53.951 CC lib/blobfs/tree.o 00:02:54.885 LIB libspdk_bdev.a 00:02:54.885 LIB libspdk_lvol.a 00:02:54.885 SO libspdk_bdev.so.17.0 00:02:54.885 LIB libspdk_blobfs.a 00:02:54.885 SO libspdk_lvol.so.10.0 00:02:54.885 SO libspdk_blobfs.so.10.0 00:02:55.144 SYMLINK libspdk_lvol.so 00:02:55.144 SYMLINK libspdk_bdev.so 00:02:55.144 SYMLINK libspdk_blobfs.so 00:02:55.144 CC lib/scsi/dev.o 00:02:55.144 CC lib/scsi/lun.o 00:02:55.144 CC lib/scsi/port.o 00:02:55.144 CC lib/scsi/scsi.o 00:02:55.144 CC lib/scsi/scsi_bdev.o 00:02:55.144 CC lib/scsi/scsi_pr.o 00:02:55.144 CC lib/nbd/nbd.o 00:02:55.144 CC lib/nvmf/ctrlr.o 00:02:55.144 CC lib/ublk/ublk.o 00:02:55.144 CC lib/ftl/ftl_core.o 00:02:55.402 CC lib/ftl/ftl_init.o 00:02:55.402 CC lib/scsi/scsi_rpc.o 00:02:55.402 CC lib/nvmf/ctrlr_discovery.o 00:02:55.660 CC lib/nvmf/ctrlr_bdev.o 00:02:55.660 CC lib/nvmf/subsystem.o 00:02:55.660 CC lib/nvmf/nvmf.o 00:02:55.660 CC lib/ftl/ftl_layout.o 00:02:55.660 CC lib/nbd/nbd_rpc.o 00:02:55.660 CC lib/scsi/task.o 00:02:55.660 CC lib/ftl/ftl_debug.o 00:02:55.919 LIB libspdk_nbd.a 00:02:55.919 SO libspdk_nbd.so.7.0 00:02:55.919 CC lib/ublk/ublk_rpc.o 00:02:55.919 LIB libspdk_scsi.a 00:02:55.919 CC lib/nvmf/nvmf_rpc.o 00:02:55.919 CC lib/nvmf/transport.o 00:02:55.919 SO libspdk_scsi.so.9.0 00:02:55.919 SYMLINK libspdk_nbd.so 00:02:55.919 CC lib/ftl/ftl_io.o 00:02:55.919 CC lib/nvmf/tcp.o 00:02:56.177 SYMLINK libspdk_scsi.so 00:02:56.177 CC lib/nvmf/stubs.o 00:02:56.177 LIB libspdk_ublk.a 00:02:56.177 SO libspdk_ublk.so.3.0 00:02:56.177 SYMLINK libspdk_ublk.so 00:02:56.177 CC lib/nvmf/mdns_server.o 00:02:56.177 CC lib/nvmf/rdma.o 00:02:56.177 CC lib/ftl/ftl_sb.o 00:02:56.435 CC lib/nvmf/auth.o 00:02:56.435 CC lib/ftl/ftl_l2p.o 00:02:56.715 CC lib/ftl/ftl_l2p_flat.o 00:02:56.715 CC lib/iscsi/conn.o 00:02:56.715 CC lib/ftl/ftl_nv_cache.o 00:02:56.715 CC lib/iscsi/init_grp.o 00:02:56.715 CC lib/iscsi/iscsi.o 00:02:56.715 CC lib/vhost/vhost.o 00:02:56.973 CC lib/ftl/ftl_band.o 00:02:56.973 CC lib/ftl/ftl_band_ops.o 00:02:57.232 CC lib/ftl/ftl_writer.o 00:02:57.232 CC lib/ftl/ftl_rq.o 00:02:57.232 CC lib/vhost/vhost_rpc.o 00:02:57.232 CC lib/vhost/vhost_scsi.o 00:02:57.232 CC lib/ftl/ftl_reloc.o 00:02:57.491 CC lib/ftl/ftl_l2p_cache.o 00:02:57.491 CC lib/vhost/vhost_blk.o 00:02:57.491 CC lib/iscsi/param.o 00:02:57.749 CC lib/ftl/ftl_p2l.o 00:02:57.749 CC lib/ftl/ftl_p2l_log.o 00:02:57.749 CC lib/ftl/mngt/ftl_mngt.o 00:02:57.749 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.008 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.008 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.008 CC lib/iscsi/portal_grp.o 00:02:58.008 CC lib/iscsi/tgt_node.o 00:02:58.008 CC lib/iscsi/iscsi_subsystem.o 00:02:58.008 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:02:58.266 CC lib/iscsi/iscsi_rpc.o 00:02:58.266 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.266 CC lib/iscsi/task.o 00:02:58.266 CC lib/vhost/rte_vhost_user.o 00:02:58.266 LIB libspdk_nvmf.a 00:02:58.266 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.524 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.524 SO libspdk_nvmf.so.20.0 00:02:58.524 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.524 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.524 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.524 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:58.524 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.524 CC lib/ftl/utils/ftl_conf.o 00:02:58.524 LIB libspdk_iscsi.a 00:02:58.524 SYMLINK libspdk_nvmf.so 00:02:58.524 CC lib/ftl/utils/ftl_md.o 00:02:58.524 CC lib/ftl/utils/ftl_mempool.o 00:02:58.783 SO libspdk_iscsi.so.8.0 00:02:58.783 CC lib/ftl/utils/ftl_bitmap.o 00:02:58.783 CC lib/ftl/utils/ftl_property.o 00:02:58.783 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:58.783 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:58.783 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:58.783 SYMLINK libspdk_iscsi.so 00:02:58.783 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:58.783 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:59.042 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:59.042 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:59.042 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:59.042 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:59.042 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:59.042 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:59.042 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:59.042 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:59.042 CC lib/ftl/base/ftl_base_dev.o 00:02:59.042 CC lib/ftl/base/ftl_base_bdev.o 00:02:59.300 CC lib/ftl/ftl_trace.o 00:02:59.300 LIB libspdk_vhost.a 00:02:59.300 SO libspdk_vhost.so.8.0 00:02:59.558 LIB libspdk_ftl.a 00:02:59.558 SYMLINK libspdk_vhost.so 00:02:59.558 SO libspdk_ftl.so.9.0 00:02:59.816 SYMLINK libspdk_ftl.so 00:03:00.404 CC module/env_dpdk/env_dpdk_rpc.o 00:03:00.404 CC module/fsdev/aio/fsdev_aio.o 00:03:00.404 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:00.404 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:00.404 CC module/scheduler/gscheduler/gscheduler.o 00:03:00.404 CC module/keyring/file/keyring.o 00:03:00.404 CC module/keyring/linux/keyring.o 00:03:00.404 CC module/accel/error/accel_error.o 00:03:00.404 CC module/blob/bdev/blob_bdev.o 00:03:00.404 CC module/sock/posix/posix.o 00:03:00.404 LIB libspdk_env_dpdk_rpc.a 00:03:00.404 SO libspdk_env_dpdk_rpc.so.6.0 00:03:00.404 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.404 CC module/keyring/linux/keyring_rpc.o 00:03:00.404 CC module/keyring/file/keyring_rpc.o 00:03:00.404 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.404 LIB libspdk_scheduler_gscheduler.a 00:03:00.404 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:00.404 SO libspdk_scheduler_gscheduler.so.4.0 00:03:00.404 LIB libspdk_scheduler_dynamic.a 00:03:00.404 CC module/accel/error/accel_error_rpc.o 00:03:00.404 SO libspdk_scheduler_dynamic.so.4.0 00:03:00.662 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:00.662 SYMLINK libspdk_scheduler_gscheduler.so 00:03:00.662 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:00.662 SYMLINK libspdk_scheduler_dynamic.so 00:03:00.662 LIB libspdk_keyring_linux.a 00:03:00.662 LIB libspdk_blob_bdev.a 00:03:00.662 LIB libspdk_keyring_file.a 00:03:00.662 SO libspdk_keyring_linux.so.1.0 00:03:00.662 SO libspdk_blob_bdev.so.11.0 00:03:00.662 SO libspdk_keyring_file.so.2.0 00:03:00.662 CC module/sock/uring/uring.o 00:03:00.662 LIB libspdk_accel_error.a 00:03:00.662 
SYMLINK libspdk_keyring_file.so 00:03:00.662 SYMLINK libspdk_keyring_linux.so 00:03:00.662 SYMLINK libspdk_blob_bdev.so 00:03:00.662 CC module/fsdev/aio/linux_aio_mgr.o 00:03:00.662 SO libspdk_accel_error.so.2.0 00:03:00.662 CC module/accel/ioat/accel_ioat.o 00:03:00.662 CC module/accel/dsa/accel_dsa.o 00:03:00.662 SYMLINK libspdk_accel_error.so 00:03:00.920 CC module/accel/dsa/accel_dsa_rpc.o 00:03:00.920 CC module/accel/iaa/accel_iaa.o 00:03:00.920 CC module/accel/iaa/accel_iaa_rpc.o 00:03:00.920 LIB libspdk_fsdev_aio.a 00:03:00.920 CC module/accel/ioat/accel_ioat_rpc.o 00:03:00.920 SO libspdk_fsdev_aio.so.1.0 00:03:00.920 CC module/bdev/delay/vbdev_delay.o 00:03:00.920 LIB libspdk_sock_posix.a 00:03:00.920 CC module/blobfs/bdev/blobfs_bdev.o 00:03:01.178 SYMLINK libspdk_fsdev_aio.so 00:03:01.178 LIB libspdk_accel_dsa.a 00:03:01.178 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:01.178 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:01.178 SO libspdk_sock_posix.so.6.0 00:03:01.178 SO libspdk_accel_dsa.so.5.0 00:03:01.178 LIB libspdk_accel_ioat.a 00:03:01.178 LIB libspdk_accel_iaa.a 00:03:01.178 CC module/bdev/error/vbdev_error.o 00:03:01.178 SO libspdk_accel_ioat.so.6.0 00:03:01.178 SO libspdk_accel_iaa.so.3.0 00:03:01.178 SYMLINK libspdk_sock_posix.so 00:03:01.178 SYMLINK libspdk_accel_dsa.so 00:03:01.178 CC module/bdev/error/vbdev_error_rpc.o 00:03:01.178 SYMLINK libspdk_accel_ioat.so 00:03:01.178 SYMLINK libspdk_accel_iaa.so 00:03:01.178 LIB libspdk_blobfs_bdev.a 00:03:01.178 SO libspdk_blobfs_bdev.so.6.0 00:03:01.437 LIB libspdk_sock_uring.a 00:03:01.437 CC module/bdev/gpt/gpt.o 00:03:01.437 SO libspdk_sock_uring.so.5.0 00:03:01.437 SYMLINK libspdk_blobfs_bdev.so 00:03:01.437 CC module/bdev/lvol/vbdev_lvol.o 00:03:01.437 CC module/bdev/malloc/bdev_malloc.o 00:03:01.437 LIB libspdk_bdev_delay.a 00:03:01.437 CC module/bdev/null/bdev_null.o 00:03:01.437 CC module/bdev/nvme/bdev_nvme.o 00:03:01.437 LIB libspdk_bdev_error.a 00:03:01.437 SYMLINK libspdk_sock_uring.so 00:03:01.437 SO libspdk_bdev_delay.so.6.0 00:03:01.437 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:01.437 SO libspdk_bdev_error.so.6.0 00:03:01.437 SYMLINK libspdk_bdev_delay.so 00:03:01.437 CC module/bdev/gpt/vbdev_gpt.o 00:03:01.437 SYMLINK libspdk_bdev_error.so 00:03:01.437 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:01.437 CC module/bdev/passthru/vbdev_passthru.o 00:03:01.437 CC module/bdev/raid/bdev_raid.o 00:03:01.437 CC module/bdev/raid/bdev_raid_rpc.o 00:03:01.695 CC module/bdev/null/bdev_null_rpc.o 00:03:01.695 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:01.695 LIB libspdk_bdev_gpt.a 00:03:01.695 CC module/bdev/raid/bdev_raid_sb.o 00:03:01.695 SO libspdk_bdev_gpt.so.6.0 00:03:01.953 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:01.953 LIB libspdk_bdev_null.a 00:03:01.953 SYMLINK libspdk_bdev_gpt.so 00:03:01.953 SO libspdk_bdev_null.so.6.0 00:03:01.953 LIB libspdk_bdev_lvol.a 00:03:01.953 LIB libspdk_bdev_malloc.a 00:03:01.953 SO libspdk_bdev_lvol.so.6.0 00:03:01.953 SYMLINK libspdk_bdev_null.so 00:03:01.953 SO libspdk_bdev_malloc.so.6.0 00:03:01.953 SYMLINK libspdk_bdev_lvol.so 00:03:01.953 LIB libspdk_bdev_passthru.a 00:03:01.953 SYMLINK libspdk_bdev_malloc.so 00:03:01.953 CC module/bdev/split/vbdev_split.o 00:03:01.953 SO libspdk_bdev_passthru.so.6.0 00:03:01.953 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:01.953 CC module/bdev/split/vbdev_split_rpc.o 00:03:01.953 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:02.211 SYMLINK libspdk_bdev_passthru.so 00:03:02.211 CC module/bdev/uring/bdev_uring.o 
00:03:02.211 CC module/bdev/aio/bdev_aio.o 00:03:02.211 CC module/bdev/ftl/bdev_ftl.o 00:03:02.211 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:02.211 CC module/bdev/uring/bdev_uring_rpc.o 00:03:02.211 LIB libspdk_bdev_split.a 00:03:02.211 CC module/bdev/iscsi/bdev_iscsi.o 00:03:02.211 SO libspdk_bdev_split.so.6.0 00:03:02.469 SYMLINK libspdk_bdev_split.so 00:03:02.469 LIB libspdk_bdev_zone_block.a 00:03:02.469 SO libspdk_bdev_zone_block.so.6.0 00:03:02.469 CC module/bdev/nvme/nvme_rpc.o 00:03:02.469 CC module/bdev/nvme/bdev_mdns_client.o 00:03:02.469 LIB libspdk_bdev_ftl.a 00:03:02.469 LIB libspdk_bdev_uring.a 00:03:02.469 SYMLINK libspdk_bdev_zone_block.so 00:03:02.469 CC module/bdev/raid/raid0.o 00:03:02.469 CC module/bdev/aio/bdev_aio_rpc.o 00:03:02.469 SO libspdk_bdev_ftl.so.6.0 00:03:02.469 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:02.469 SO libspdk_bdev_uring.so.6.0 00:03:02.469 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:02.727 SYMLINK libspdk_bdev_ftl.so 00:03:02.727 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:02.727 SYMLINK libspdk_bdev_uring.so 00:03:02.727 CC module/bdev/raid/raid1.o 00:03:02.727 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:02.727 LIB libspdk_bdev_aio.a 00:03:02.727 CC module/bdev/nvme/vbdev_opal.o 00:03:02.727 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:02.727 SO libspdk_bdev_aio.so.6.0 00:03:02.727 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:02.727 LIB libspdk_bdev_iscsi.a 00:03:02.727 SYMLINK libspdk_bdev_aio.so 00:03:02.727 CC module/bdev/raid/concat.o 00:03:02.985 SO libspdk_bdev_iscsi.so.6.0 00:03:02.985 SYMLINK libspdk_bdev_iscsi.so 00:03:02.985 LIB libspdk_bdev_raid.a 00:03:02.985 LIB libspdk_bdev_virtio.a 00:03:03.243 SO libspdk_bdev_virtio.so.6.0 00:03:03.243 SO libspdk_bdev_raid.so.6.0 00:03:03.243 SYMLINK libspdk_bdev_virtio.so 00:03:03.243 SYMLINK libspdk_bdev_raid.so 00:03:03.830 LIB libspdk_bdev_nvme.a 00:03:03.830 SO libspdk_bdev_nvme.so.7.1 00:03:03.830 SYMLINK libspdk_bdev_nvme.so 00:03:04.396 CC module/event/subsystems/vmd/vmd.o 00:03:04.396 CC module/event/subsystems/sock/sock.o 00:03:04.396 CC module/event/subsystems/scheduler/scheduler.o 00:03:04.396 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:04.396 CC module/event/subsystems/iobuf/iobuf.o 00:03:04.396 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:04.396 CC module/event/subsystems/keyring/keyring.o 00:03:04.396 CC module/event/subsystems/fsdev/fsdev.o 00:03:04.396 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:04.654 LIB libspdk_event_vmd.a 00:03:04.654 LIB libspdk_event_keyring.a 00:03:04.654 LIB libspdk_event_sock.a 00:03:04.654 LIB libspdk_event_fsdev.a 00:03:04.654 LIB libspdk_event_vhost_blk.a 00:03:04.654 LIB libspdk_event_scheduler.a 00:03:04.654 SO libspdk_event_vmd.so.6.0 00:03:04.654 LIB libspdk_event_iobuf.a 00:03:04.654 SO libspdk_event_keyring.so.1.0 00:03:04.654 SO libspdk_event_scheduler.so.4.0 00:03:04.654 SO libspdk_event_vhost_blk.so.3.0 00:03:04.654 SO libspdk_event_fsdev.so.1.0 00:03:04.654 SO libspdk_event_sock.so.5.0 00:03:04.654 SO libspdk_event_iobuf.so.3.0 00:03:04.654 SYMLINK libspdk_event_vhost_blk.so 00:03:04.655 SYMLINK libspdk_event_keyring.so 00:03:04.655 SYMLINK libspdk_event_vmd.so 00:03:04.655 SYMLINK libspdk_event_fsdev.so 00:03:04.655 SYMLINK libspdk_event_scheduler.so 00:03:04.655 SYMLINK libspdk_event_sock.so 00:03:04.655 SYMLINK libspdk_event_iobuf.so 00:03:04.913 CC module/event/subsystems/accel/accel.o 00:03:05.171 LIB libspdk_event_accel.a 00:03:05.171 SO libspdk_event_accel.so.6.0 00:03:05.171 SYMLINK libspdk_event_accel.so 
00:03:05.430 CC module/event/subsystems/bdev/bdev.o 00:03:05.430 LIB libspdk_event_bdev.a 00:03:05.688 SO libspdk_event_bdev.so.6.0 00:03:05.688 SYMLINK libspdk_event_bdev.so 00:03:05.947 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.947 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.947 CC module/event/subsystems/nbd/nbd.o 00:03:05.947 CC module/event/subsystems/scsi/scsi.o 00:03:05.947 CC module/event/subsystems/ublk/ublk.o 00:03:05.947 LIB libspdk_event_ublk.a 00:03:05.947 LIB libspdk_event_nbd.a 00:03:05.947 LIB libspdk_event_scsi.a 00:03:05.947 SO libspdk_event_ublk.so.3.0 00:03:05.947 SO libspdk_event_nbd.so.6.0 00:03:06.205 SO libspdk_event_scsi.so.6.0 00:03:06.205 SYMLINK libspdk_event_nbd.so 00:03:06.205 SYMLINK libspdk_event_ublk.so 00:03:06.205 LIB libspdk_event_nvmf.a 00:03:06.205 SYMLINK libspdk_event_scsi.so 00:03:06.205 SO libspdk_event_nvmf.so.6.0 00:03:06.205 SYMLINK libspdk_event_nvmf.so 00:03:06.463 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:06.463 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.464 LIB libspdk_event_vhost_scsi.a 00:03:06.722 LIB libspdk_event_iscsi.a 00:03:06.722 SO libspdk_event_vhost_scsi.so.3.0 00:03:06.722 SO libspdk_event_iscsi.so.6.0 00:03:06.722 SYMLINK libspdk_event_vhost_scsi.so 00:03:06.722 SYMLINK libspdk_event_iscsi.so 00:03:06.980 SO libspdk.so.6.0 00:03:06.980 SYMLINK libspdk.so 00:03:07.238 CXX app/trace/trace.o 00:03:07.238 CC test/rpc_client/rpc_client_test.o 00:03:07.238 TEST_HEADER include/spdk/accel.h 00:03:07.238 TEST_HEADER include/spdk/accel_module.h 00:03:07.238 TEST_HEADER include/spdk/assert.h 00:03:07.238 TEST_HEADER include/spdk/barrier.h 00:03:07.238 TEST_HEADER include/spdk/base64.h 00:03:07.238 TEST_HEADER include/spdk/bdev.h 00:03:07.238 TEST_HEADER include/spdk/bdev_module.h 00:03:07.238 TEST_HEADER include/spdk/bdev_zone.h 00:03:07.238 TEST_HEADER include/spdk/bit_array.h 00:03:07.238 TEST_HEADER include/spdk/bit_pool.h 00:03:07.238 TEST_HEADER include/spdk/blob_bdev.h 00:03:07.238 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:07.238 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:07.238 TEST_HEADER include/spdk/blobfs.h 00:03:07.238 TEST_HEADER include/spdk/blob.h 00:03:07.238 TEST_HEADER include/spdk/conf.h 00:03:07.238 TEST_HEADER include/spdk/config.h 00:03:07.238 TEST_HEADER include/spdk/cpuset.h 00:03:07.238 TEST_HEADER include/spdk/crc16.h 00:03:07.238 TEST_HEADER include/spdk/crc32.h 00:03:07.238 TEST_HEADER include/spdk/crc64.h 00:03:07.238 TEST_HEADER include/spdk/dif.h 00:03:07.238 TEST_HEADER include/spdk/dma.h 00:03:07.238 TEST_HEADER include/spdk/endian.h 00:03:07.238 TEST_HEADER include/spdk/env_dpdk.h 00:03:07.238 TEST_HEADER include/spdk/env.h 00:03:07.238 TEST_HEADER include/spdk/event.h 00:03:07.238 TEST_HEADER include/spdk/fd_group.h 00:03:07.238 TEST_HEADER include/spdk/fd.h 00:03:07.238 TEST_HEADER include/spdk/file.h 00:03:07.238 CC examples/util/zipf/zipf.o 00:03:07.238 TEST_HEADER include/spdk/fsdev.h 00:03:07.238 TEST_HEADER include/spdk/fsdev_module.h 00:03:07.238 TEST_HEADER include/spdk/ftl.h 00:03:07.238 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:07.238 CC examples/ioat/perf/perf.o 00:03:07.238 TEST_HEADER include/spdk/gpt_spec.h 00:03:07.238 CC test/thread/poller_perf/poller_perf.o 00:03:07.238 TEST_HEADER include/spdk/hexlify.h 00:03:07.238 TEST_HEADER include/spdk/histogram_data.h 00:03:07.238 TEST_HEADER include/spdk/idxd.h 00:03:07.238 TEST_HEADER include/spdk/idxd_spec.h 00:03:07.238 TEST_HEADER include/spdk/init.h 00:03:07.238 TEST_HEADER 
include/spdk/ioat.h 00:03:07.238 TEST_HEADER include/spdk/ioat_spec.h 00:03:07.238 TEST_HEADER include/spdk/iscsi_spec.h 00:03:07.238 TEST_HEADER include/spdk/json.h 00:03:07.238 TEST_HEADER include/spdk/jsonrpc.h 00:03:07.238 TEST_HEADER include/spdk/keyring.h 00:03:07.238 TEST_HEADER include/spdk/keyring_module.h 00:03:07.238 TEST_HEADER include/spdk/likely.h 00:03:07.238 TEST_HEADER include/spdk/log.h 00:03:07.238 TEST_HEADER include/spdk/lvol.h 00:03:07.238 TEST_HEADER include/spdk/md5.h 00:03:07.238 TEST_HEADER include/spdk/memory.h 00:03:07.238 TEST_HEADER include/spdk/mmio.h 00:03:07.238 TEST_HEADER include/spdk/nbd.h 00:03:07.238 TEST_HEADER include/spdk/net.h 00:03:07.238 TEST_HEADER include/spdk/notify.h 00:03:07.238 TEST_HEADER include/spdk/nvme.h 00:03:07.238 TEST_HEADER include/spdk/nvme_intel.h 00:03:07.238 CC test/app/bdev_svc/bdev_svc.o 00:03:07.238 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:07.238 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:07.239 TEST_HEADER include/spdk/nvme_spec.h 00:03:07.239 CC test/dma/test_dma/test_dma.o 00:03:07.239 TEST_HEADER include/spdk/nvme_zns.h 00:03:07.239 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:07.239 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:07.239 TEST_HEADER include/spdk/nvmf.h 00:03:07.239 TEST_HEADER include/spdk/nvmf_spec.h 00:03:07.239 TEST_HEADER include/spdk/nvmf_transport.h 00:03:07.239 TEST_HEADER include/spdk/opal.h 00:03:07.239 TEST_HEADER include/spdk/opal_spec.h 00:03:07.239 TEST_HEADER include/spdk/pci_ids.h 00:03:07.239 TEST_HEADER include/spdk/pipe.h 00:03:07.239 TEST_HEADER include/spdk/queue.h 00:03:07.239 TEST_HEADER include/spdk/reduce.h 00:03:07.239 TEST_HEADER include/spdk/rpc.h 00:03:07.239 CC test/env/mem_callbacks/mem_callbacks.o 00:03:07.239 TEST_HEADER include/spdk/scheduler.h 00:03:07.239 TEST_HEADER include/spdk/scsi.h 00:03:07.239 TEST_HEADER include/spdk/scsi_spec.h 00:03:07.239 TEST_HEADER include/spdk/sock.h 00:03:07.239 TEST_HEADER include/spdk/stdinc.h 00:03:07.239 TEST_HEADER include/spdk/string.h 00:03:07.239 TEST_HEADER include/spdk/thread.h 00:03:07.239 TEST_HEADER include/spdk/trace.h 00:03:07.239 TEST_HEADER include/spdk/trace_parser.h 00:03:07.497 TEST_HEADER include/spdk/tree.h 00:03:07.497 LINK rpc_client_test 00:03:07.497 TEST_HEADER include/spdk/ublk.h 00:03:07.497 TEST_HEADER include/spdk/util.h 00:03:07.497 TEST_HEADER include/spdk/uuid.h 00:03:07.497 TEST_HEADER include/spdk/version.h 00:03:07.497 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:07.497 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:07.497 TEST_HEADER include/spdk/vhost.h 00:03:07.497 TEST_HEADER include/spdk/vmd.h 00:03:07.497 TEST_HEADER include/spdk/xor.h 00:03:07.497 TEST_HEADER include/spdk/zipf.h 00:03:07.497 CXX test/cpp_headers/accel.o 00:03:07.497 LINK zipf 00:03:07.497 LINK poller_perf 00:03:07.497 LINK interrupt_tgt 00:03:07.497 CXX test/cpp_headers/accel_module.o 00:03:07.497 LINK bdev_svc 00:03:07.497 LINK ioat_perf 00:03:07.497 CXX test/cpp_headers/assert.o 00:03:07.755 LINK spdk_trace 00:03:07.755 CXX test/cpp_headers/barrier.o 00:03:07.755 CC test/event/event_perf/event_perf.o 00:03:07.755 CC examples/sock/hello_world/hello_sock.o 00:03:07.755 CC examples/ioat/verify/verify.o 00:03:07.755 CC test/event/reactor/reactor.o 00:03:07.755 CC examples/thread/thread/thread_ex.o 00:03:07.755 LINK test_dma 00:03:08.014 CC app/trace_record/trace_record.o 00:03:08.014 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:08.014 LINK event_perf 00:03:08.014 CXX test/cpp_headers/base64.o 00:03:08.014 LINK reactor 
00:03:08.014 LINK mem_callbacks 00:03:08.014 LINK verify 00:03:08.014 LINK hello_sock 00:03:08.014 LINK thread 00:03:08.272 CXX test/cpp_headers/bdev.o 00:03:08.272 CC test/app/histogram_perf/histogram_perf.o 00:03:08.272 CC test/app/jsoncat/jsoncat.o 00:03:08.272 LINK spdk_trace_record 00:03:08.272 CC test/env/vtophys/vtophys.o 00:03:08.272 CC test/event/reactor_perf/reactor_perf.o 00:03:08.272 CC test/event/app_repeat/app_repeat.o 00:03:08.272 CC test/event/scheduler/scheduler.o 00:03:08.272 LINK histogram_perf 00:03:08.272 LINK nvme_fuzz 00:03:08.272 CXX test/cpp_headers/bdev_module.o 00:03:08.272 LINK jsoncat 00:03:08.531 LINK vtophys 00:03:08.531 LINK reactor_perf 00:03:08.531 LINK app_repeat 00:03:08.531 CC examples/vmd/lsvmd/lsvmd.o 00:03:08.531 CC app/nvmf_tgt/nvmf_main.o 00:03:08.531 CXX test/cpp_headers/bdev_zone.o 00:03:08.531 CC examples/vmd/led/led.o 00:03:08.531 LINK scheduler 00:03:08.531 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.531 CC test/env/memory/memory_ut.o 00:03:08.531 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:08.531 LINK lsvmd 00:03:08.793 CC examples/idxd/perf/perf.o 00:03:08.793 CC test/env/pci/pci_ut.o 00:03:08.793 LINK nvmf_tgt 00:03:08.793 LINK led 00:03:08.793 CXX test/cpp_headers/bit_array.o 00:03:08.793 CXX test/cpp_headers/bit_pool.o 00:03:08.793 LINK env_dpdk_post_init 00:03:09.056 CXX test/cpp_headers/blob_bdev.o 00:03:09.057 CC test/accel/dif/dif.o 00:03:09.057 LINK idxd_perf 00:03:09.057 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.057 CC app/spdk_lspci/spdk_lspci.o 00:03:09.057 CC app/spdk_nvme_perf/perf.o 00:03:09.057 CXX test/cpp_headers/blobfs_bdev.o 00:03:09.057 LINK pci_ut 00:03:09.057 CC app/spdk_tgt/spdk_tgt.o 00:03:09.324 LINK spdk_lspci 00:03:09.324 LINK iscsi_tgt 00:03:09.324 CXX test/cpp_headers/blobfs.o 00:03:09.324 LINK spdk_tgt 00:03:09.324 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:09.581 CXX test/cpp_headers/blob.o 00:03:09.581 CC examples/accel/perf/accel_perf.o 00:03:09.581 CC test/app/stub/stub.o 00:03:09.581 CC examples/blob/hello_world/hello_blob.o 00:03:09.581 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:09.581 LINK dif 00:03:09.581 LINK hello_fsdev 00:03:09.581 CXX test/cpp_headers/conf.o 00:03:09.839 LINK stub 00:03:09.839 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:09.839 LINK memory_ut 00:03:09.839 LINK hello_blob 00:03:09.839 CXX test/cpp_headers/config.o 00:03:09.839 CXX test/cpp_headers/cpuset.o 00:03:10.097 LINK spdk_nvme_perf 00:03:10.097 CC app/spdk_nvme_identify/identify.o 00:03:10.097 LINK accel_perf 00:03:10.097 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.097 CC test/blobfs/mkfs/mkfs.o 00:03:10.097 CXX test/cpp_headers/crc16.o 00:03:10.097 CXX test/cpp_headers/crc32.o 00:03:10.097 CC examples/blob/cli/blobcli.o 00:03:10.358 LINK vhost_fuzz 00:03:10.358 LINK iscsi_fuzz 00:03:10.358 LINK spdk_nvme_discover 00:03:10.358 LINK mkfs 00:03:10.358 CXX test/cpp_headers/crc64.o 00:03:10.358 CC test/lvol/esnap/esnap.o 00:03:10.358 CC examples/nvme/hello_world/hello_world.o 00:03:10.358 CC examples/nvme/reconnect/reconnect.o 00:03:10.616 CXX test/cpp_headers/dif.o 00:03:10.616 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.616 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:10.616 LINK hello_world 00:03:10.616 CC test/nvme/aer/aer.o 00:03:10.616 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.616 CXX test/cpp_headers/dma.o 00:03:10.616 LINK blobcli 00:03:10.875 LINK hello_bdev 00:03:10.875 LINK spdk_nvme_identify 00:03:10.875 LINK reconnect 00:03:10.875 CXX test/cpp_headers/endian.o 
00:03:10.875 CC examples/nvme/arbitration/arbitration.o 00:03:10.875 CXX test/cpp_headers/env_dpdk.o 00:03:10.875 LINK aer 00:03:11.134 LINK nvme_manage 00:03:11.134 CC app/spdk_top/spdk_top.o 00:03:11.134 CXX test/cpp_headers/env.o 00:03:11.134 CC test/nvme/reset/reset.o 00:03:11.134 CC test/nvme/sgl/sgl.o 00:03:11.134 CC test/nvme/e2edp/nvme_dp.o 00:03:11.134 CC test/bdev/bdevio/bdevio.o 00:03:11.134 LINK arbitration 00:03:11.134 CXX test/cpp_headers/event.o 00:03:11.392 CC examples/nvme/hotplug/hotplug.o 00:03:11.392 LINK reset 00:03:11.392 LINK sgl 00:03:11.392 CXX test/cpp_headers/fd_group.o 00:03:11.392 LINK bdevperf 00:03:11.392 LINK nvme_dp 00:03:11.651 CC app/vhost/vhost.o 00:03:11.651 LINK hotplug 00:03:11.651 CC test/nvme/overhead/overhead.o 00:03:11.651 CXX test/cpp_headers/fd.o 00:03:11.651 LINK bdevio 00:03:11.651 CC test/nvme/err_injection/err_injection.o 00:03:11.651 CC test/nvme/startup/startup.o 00:03:11.651 CC test/nvme/reserve/reserve.o 00:03:11.651 LINK vhost 00:03:11.909 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:11.909 CXX test/cpp_headers/file.o 00:03:11.909 CXX test/cpp_headers/fsdev.o 00:03:11.909 LINK err_injection 00:03:11.909 LINK spdk_top 00:03:11.909 LINK overhead 00:03:11.909 LINK startup 00:03:11.909 CXX test/cpp_headers/fsdev_module.o 00:03:11.909 LINK reserve 00:03:11.909 LINK cmb_copy 00:03:12.168 CC test/nvme/simple_copy/simple_copy.o 00:03:12.168 CC test/nvme/connect_stress/connect_stress.o 00:03:12.168 CXX test/cpp_headers/ftl.o 00:03:12.168 CC test/nvme/boot_partition/boot_partition.o 00:03:12.168 CC examples/nvme/abort/abort.o 00:03:12.168 CC test/nvme/compliance/nvme_compliance.o 00:03:12.168 CC app/spdk_dd/spdk_dd.o 00:03:12.168 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.168 CC app/fio/nvme/fio_plugin.o 00:03:12.427 LINK connect_stress 00:03:12.427 LINK boot_partition 00:03:12.427 LINK simple_copy 00:03:12.427 CXX test/cpp_headers/fuse_dispatcher.o 00:03:12.427 LINK pmr_persistence 00:03:12.686 LINK nvme_compliance 00:03:12.686 CXX test/cpp_headers/gpt_spec.o 00:03:12.686 LINK abort 00:03:12.686 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.686 CC test/nvme/fused_ordering/fused_ordering.o 00:03:12.686 LINK spdk_dd 00:03:12.686 CC app/fio/bdev/fio_plugin.o 00:03:12.686 CXX test/cpp_headers/hexlify.o 00:03:12.686 CC test/nvme/fdp/fdp.o 00:03:12.686 CC test/nvme/cuse/cuse.o 00:03:12.945 LINK doorbell_aers 00:03:12.945 LINK fused_ordering 00:03:12.945 LINK spdk_nvme 00:03:12.945 CXX test/cpp_headers/histogram_data.o 00:03:12.945 CXX test/cpp_headers/idxd.o 00:03:12.945 CXX test/cpp_headers/idxd_spec.o 00:03:12.945 CXX test/cpp_headers/init.o 00:03:12.945 CC examples/nvmf/nvmf/nvmf.o 00:03:12.945 CXX test/cpp_headers/ioat.o 00:03:12.945 CXX test/cpp_headers/ioat_spec.o 00:03:13.204 LINK fdp 00:03:13.204 CXX test/cpp_headers/iscsi_spec.o 00:03:13.204 CXX test/cpp_headers/json.o 00:03:13.204 CXX test/cpp_headers/jsonrpc.o 00:03:13.204 CXX test/cpp_headers/keyring.o 00:03:13.204 LINK spdk_bdev 00:03:13.204 CXX test/cpp_headers/keyring_module.o 00:03:13.204 CXX test/cpp_headers/likely.o 00:03:13.204 CXX test/cpp_headers/log.o 00:03:13.204 LINK nvmf 00:03:13.204 CXX test/cpp_headers/lvol.o 00:03:13.463 CXX test/cpp_headers/md5.o 00:03:13.463 CXX test/cpp_headers/memory.o 00:03:13.463 CXX test/cpp_headers/mmio.o 00:03:13.463 CXX test/cpp_headers/nbd.o 00:03:13.463 CXX test/cpp_headers/net.o 00:03:13.463 CXX test/cpp_headers/notify.o 00:03:13.463 CXX test/cpp_headers/nvme.o 00:03:13.463 CXX test/cpp_headers/nvme_intel.o 00:03:13.463 CXX 
test/cpp_headers/nvme_ocssd.o 00:03:13.463 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:13.463 CXX test/cpp_headers/nvme_spec.o 00:03:13.722 CXX test/cpp_headers/nvme_zns.o 00:03:13.722 CXX test/cpp_headers/nvmf_cmd.o 00:03:13.722 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:13.722 CXX test/cpp_headers/nvmf.o 00:03:13.722 CXX test/cpp_headers/nvmf_spec.o 00:03:13.722 CXX test/cpp_headers/nvmf_transport.o 00:03:13.722 CXX test/cpp_headers/opal.o 00:03:13.722 CXX test/cpp_headers/opal_spec.o 00:03:13.722 CXX test/cpp_headers/pci_ids.o 00:03:13.722 CXX test/cpp_headers/pipe.o 00:03:13.722 CXX test/cpp_headers/queue.o 00:03:13.980 CXX test/cpp_headers/reduce.o 00:03:13.980 CXX test/cpp_headers/rpc.o 00:03:13.980 CXX test/cpp_headers/scheduler.o 00:03:13.980 CXX test/cpp_headers/scsi.o 00:03:13.980 CXX test/cpp_headers/scsi_spec.o 00:03:13.980 CXX test/cpp_headers/sock.o 00:03:13.980 CXX test/cpp_headers/stdinc.o 00:03:13.980 CXX test/cpp_headers/string.o 00:03:13.980 CXX test/cpp_headers/thread.o 00:03:13.980 CXX test/cpp_headers/trace.o 00:03:13.980 CXX test/cpp_headers/trace_parser.o 00:03:13.980 CXX test/cpp_headers/tree.o 00:03:13.980 CXX test/cpp_headers/ublk.o 00:03:14.238 CXX test/cpp_headers/util.o 00:03:14.238 CXX test/cpp_headers/uuid.o 00:03:14.238 CXX test/cpp_headers/version.o 00:03:14.238 CXX test/cpp_headers/vfio_user_pci.o 00:03:14.238 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.238 CXX test/cpp_headers/vhost.o 00:03:14.238 CXX test/cpp_headers/vmd.o 00:03:14.238 CXX test/cpp_headers/xor.o 00:03:14.238 LINK cuse 00:03:14.238 CXX test/cpp_headers/zipf.o 00:03:15.614 LINK esnap 00:03:15.614 00:03:15.615 real 1m25.455s 00:03:15.615 user 8m1.608s 00:03:15.615 sys 1m39.619s 00:03:15.615 13:13:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:15.615 13:13:04 make -- common/autotest_common.sh@10 -- $ set +x 00:03:15.615 ************************************ 00:03:15.615 END TEST make 00:03:15.615 ************************************ 00:03:15.874 13:13:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:15.874 13:13:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:15.874 13:13:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:15.874 13:13:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.874 13:13:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:15.874 13:13:04 -- pm/common@44 -- $ pid=5409 00:03:15.874 13:13:04 -- pm/common@50 -- $ kill -TERM 5409 00:03:15.874 13:13:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.874 13:13:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:15.874 13:13:04 -- pm/common@44 -- $ pid=5410 00:03:15.874 13:13:04 -- pm/common@50 -- $ kill -TERM 5410 00:03:15.874 13:13:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:15.874 13:13:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:15.874 13:13:04 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:15.874 13:13:04 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:15.874 13:13:04 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:15.874 13:13:05 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:15.874 13:13:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.874 13:13:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.874 
13:13:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.874 13:13:05 -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.874 13:13:05 -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.874 13:13:05 -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.874 13:13:05 -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.874 13:13:05 -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.874 13:13:05 -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.874 13:13:05 -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.874 13:13:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.874 13:13:05 -- scripts/common.sh@344 -- # case "$op" in 00:03:15.874 13:13:05 -- scripts/common.sh@345 -- # : 1 00:03:15.874 13:13:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.874 13:13:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:15.874 13:13:05 -- scripts/common.sh@365 -- # decimal 1 00:03:15.874 13:13:05 -- scripts/common.sh@353 -- # local d=1 00:03:15.874 13:13:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.874 13:13:05 -- scripts/common.sh@355 -- # echo 1 00:03:15.874 13:13:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.874 13:13:05 -- scripts/common.sh@366 -- # decimal 2 00:03:15.874 13:13:05 -- scripts/common.sh@353 -- # local d=2 00:03:15.874 13:13:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.874 13:13:05 -- scripts/common.sh@355 -- # echo 2 00:03:15.874 13:13:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.874 13:13:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.874 13:13:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.874 13:13:05 -- scripts/common.sh@368 -- # return 0 00:03:15.874 13:13:05 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.874 13:13:05 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:15.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.874 --rc genhtml_branch_coverage=1 00:03:15.874 --rc genhtml_function_coverage=1 00:03:15.874 --rc genhtml_legend=1 00:03:15.874 --rc geninfo_all_blocks=1 00:03:15.874 --rc geninfo_unexecuted_blocks=1 00:03:15.874 00:03:15.874 ' 00:03:15.874 13:13:05 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:15.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.874 --rc genhtml_branch_coverage=1 00:03:15.874 --rc genhtml_function_coverage=1 00:03:15.874 --rc genhtml_legend=1 00:03:15.874 --rc geninfo_all_blocks=1 00:03:15.874 --rc geninfo_unexecuted_blocks=1 00:03:15.874 00:03:15.874 ' 00:03:15.874 13:13:05 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:15.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.874 --rc genhtml_branch_coverage=1 00:03:15.874 --rc genhtml_function_coverage=1 00:03:15.874 --rc genhtml_legend=1 00:03:15.874 --rc geninfo_all_blocks=1 00:03:15.874 --rc geninfo_unexecuted_blocks=1 00:03:15.874 00:03:15.874 ' 00:03:15.874 13:13:05 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:15.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.874 --rc genhtml_branch_coverage=1 00:03:15.874 --rc genhtml_function_coverage=1 00:03:15.874 --rc genhtml_legend=1 00:03:15.874 --rc geninfo_all_blocks=1 00:03:15.874 --rc geninfo_unexecuted_blocks=1 00:03:15.874 00:03:15.874 ' 00:03:15.874 13:13:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:15.874 13:13:05 -- nvmf/common.sh@7 -- # uname -s 00:03:15.874 
13:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:15.874 13:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:15.874 13:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:15.874 13:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:15.874 13:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:15.874 13:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:15.874 13:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:15.874 13:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:15.874 13:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:15.874 13:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:15.874 13:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:03:15.874 13:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:03:15.874 13:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:15.874 13:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:15.874 13:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:15.874 13:13:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:15.874 13:13:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:15.875 13:13:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:15.875 13:13:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:15.875 13:13:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.875 13:13:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.875 13:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.875 13:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.875 13:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.875 13:13:05 -- paths/export.sh@5 -- # export PATH 00:03:15.875 13:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.875 13:13:05 -- nvmf/common.sh@51 -- # : 0 00:03:15.875 13:13:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:15.875 13:13:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:15.875 13:13:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:15.875 13:13:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:15.875 13:13:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:15.875 13:13:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:15.875 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:15.875 13:13:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:15.875 13:13:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:15.875 13:13:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:15.875 13:13:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:15.875 13:13:05 -- spdk/autotest.sh@32 -- # uname -s 00:03:15.875 13:13:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:15.875 13:13:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:15.875 13:13:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:15.875 13:13:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:15.875 13:13:05 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:15.875 13:13:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.133 13:13:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.133 13:13:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.133 13:13:05 -- spdk/autotest.sh@48 -- # udevadm_pid=54461 00:03:16.133 13:13:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.133 13:13:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.133 13:13:05 -- pm/common@17 -- # local monitor 00:03:16.133 13:13:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.133 13:13:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.133 13:13:05 -- pm/common@25 -- # sleep 1 00:03:16.133 13:13:05 -- pm/common@21 -- # date +%s 00:03:16.133 13:13:05 -- pm/common@21 -- # date +%s 00:03:16.133 13:13:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731849185 00:03:16.133 13:13:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731849185 00:03:16.133 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731849185_collect-cpu-load.pm.log 00:03:16.133 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731849185_collect-vmstat.pm.log 00:03:17.069 13:13:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.069 13:13:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.069 13:13:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.069 13:13:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.069 13:13:06 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.069 13:13:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:17.069 13:13:06 -- common/autotest_common.sh@10 -- # set +x 00:03:17.069 13:13:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:17.069 13:13:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:17.069 13:13:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:17.069 13:13:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:17.069 13:13:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:17.069 13:13:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.069 13:13:06 -- common/autotest_common.sh@1457 -- # uname 00:03:17.069 13:13:06 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:17.069 13:13:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.069 13:13:06 -- common/autotest_common.sh@1477 -- # uname 00:03:17.069 13:13:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:17.069 13:13:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:17.069 13:13:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:17.069 lcov: LCOV version 1.15 00:03:17.328 13:13:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:32.256 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.256 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.141 13:13:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:47.141 13:13:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.141 13:13:35 -- common/autotest_common.sh@10 -- # set +x 00:03:47.141 13:13:35 -- spdk/autotest.sh@78 -- # rm -f 00:03:47.141 13:13:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.400 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:47.660 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:47.660 13:13:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:47.660 13:13:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:47.660 13:13:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:47.660 13:13:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:47.660 13:13:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:47.660 13:13:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:47.660 13:13:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:47.660 13:13:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:47.660 13:13:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:47.660 13:13:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:47.660 13:13:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:47.660 13:13:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:47.660 13:13:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:47.660 13:13:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1660 -- # for nvme 
in /sys/block/nvme* 00:03:47.660 13:13:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:47.660 13:13:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:47.660 13:13:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:47.660 13:13:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:47.660 13:13:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:47.660 13:13:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.660 13:13:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.660 13:13:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:47.660 13:13:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:47.660 13:13:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.660 No valid GPT data, bailing 00:03:47.660 13:13:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.660 13:13:36 -- scripts/common.sh@394 -- # pt= 00:03:47.660 13:13:36 -- scripts/common.sh@395 -- # return 1 00:03:47.660 13:13:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.660 1+0 records in 00:03:47.660 1+0 records out 00:03:47.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380227 s, 276 MB/s 00:03:47.660 13:13:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.660 13:13:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.660 13:13:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:47.660 13:13:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:47.661 13:13:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:47.661 No valid GPT data, bailing 00:03:47.661 13:13:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.661 13:13:36 -- scripts/common.sh@394 -- # pt= 00:03:47.661 13:13:36 -- scripts/common.sh@395 -- # return 1 00:03:47.661 13:13:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:47.661 1+0 records in 00:03:47.661 1+0 records out 00:03:47.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536116 s, 196 MB/s 00:03:47.661 13:13:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.661 13:13:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.661 13:13:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:47.661 13:13:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:47.661 13:13:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:47.661 No valid GPT data, bailing 00:03:47.661 13:13:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:47.919 13:13:36 -- scripts/common.sh@394 -- # pt= 00:03:47.919 13:13:36 -- scripts/common.sh@395 -- # return 1 00:03:47.919 13:13:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:47.919 1+0 records in 00:03:47.919 1+0 records out 00:03:47.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511778 s, 205 MB/s 00:03:47.919 13:13:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.919 13:13:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.919 13:13:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:47.919 13:13:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:47.919 13:13:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:47.919 No valid GPT data, bailing 
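For reference, the check-and-wipe step traced above (and finishing just below for the last namespace) boils down to a few shell commands; the blkid and dd invocations mirror the trace, while the explicit device list and the loop wrapper are illustrative assumptions rather than the autotest code itself:

for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3; do   # illustrative device list
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty output means no partition table was found
    if [[ -z "$pt" ]]; then
        # "No valid GPT data, bailing": clear the first MiB so later tests start from a blank namespace
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done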
00:03:47.919 13:13:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:47.919 13:13:36 -- scripts/common.sh@394 -- # pt= 00:03:47.919 13:13:36 -- scripts/common.sh@395 -- # return 1 00:03:47.919 13:13:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:47.919 1+0 records in 00:03:47.919 1+0 records out 00:03:47.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513821 s, 204 MB/s 00:03:47.919 13:13:36 -- spdk/autotest.sh@105 -- # sync 00:03:47.919 13:13:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.919 13:13:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.919 13:13:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.451 13:13:39 -- spdk/autotest.sh@111 -- # uname -s 00:03:50.451 13:13:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:50.451 13:13:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:50.451 13:13:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:50.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.710 Hugepages 00:03:50.710 node hugesize free / total 00:03:50.710 node0 1048576kB 0 / 0 00:03:50.710 node0 2048kB 0 / 0 00:03:50.710 00:03:50.710 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.710 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:50.710 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:50.968 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:50.968 13:13:39 -- spdk/autotest.sh@117 -- # uname -s 00:03:50.968 13:13:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:50.968 13:13:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:50.968 13:13:39 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:51.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.535 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:51.793 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:51.793 13:13:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:52.727 13:13:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:52.727 13:13:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:52.727 13:13:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:52.727 13:13:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:52.727 13:13:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:52.727 13:13:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:52.727 13:13:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:52.727 13:13:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:52.727 13:13:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:52.727 13:13:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:52.727 13:13:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:52.727 13:13:41 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.293 Waiting for block devices as requested 00:03:53.293 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:03:53.293 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:53.293 13:13:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:53.293 13:13:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:53.293 13:13:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:53.293 13:13:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.293 13:13:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:53.293 13:13:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:53.293 13:13:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:53.293 13:13:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:53.293 13:13:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:53.293 13:13:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:53.293 13:13:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:53.293 13:13:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:53.293 13:13:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:53.553 13:13:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:53.553 13:13:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:53.553 13:13:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:53.553 13:13:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:53.553 13:13:42 -- common/autotest_common.sh@1543 -- # continue 00:03:53.553 13:13:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:53.553 13:13:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:53.553 13:13:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.553 13:13:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:53.553 13:13:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:53.553 13:13:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:53.553 13:13:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:53.553 13:13:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:53.553 13:13:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:53.553 13:13:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:53.553 13:13:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:53.553 13:13:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:53.553 13:13:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:53.553 13:13:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:53.553 13:13:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:53.553 13:13:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:53.553 13:13:42 -- 
common/autotest_common.sh@1540 -- # grep unvmcap 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:53.553 13:13:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:53.553 13:13:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:53.553 13:13:42 -- common/autotest_common.sh@1543 -- # continue 00:03:53.553 13:13:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:53.553 13:13:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.553 13:13:42 -- common/autotest_common.sh@10 -- # set +x 00:03:53.553 13:13:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:53.553 13:13:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.553 13:13:42 -- common/autotest_common.sh@10 -- # set +x 00:03:53.553 13:13:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.157 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.416 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.416 13:13:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:54.416 13:13:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:54.416 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.416 13:13:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:54.416 13:13:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:54.416 13:13:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:54.416 13:13:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:54.416 13:13:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:54.416 13:13:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:54.416 13:13:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:54.416 13:13:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:54.416 13:13:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:54.416 13:13:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:54.416 13:13:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.416 13:13:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:54.416 13:13:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:54.416 13:13:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:54.416 13:13:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:54.416 13:13:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:54.416 13:13:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:54.416 13:13:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:54.416 13:13:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:54.416 13:13:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:54.416 13:13:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:54.416 13:13:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:54.416 13:13:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:54.416 13:13:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:54.416 13:13:43 -- common/autotest_common.sh@1572 -- # return 0 00:03:54.416 13:13:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:54.416 13:13:43 
-- common/autotest_common.sh@1580 -- # return 0 00:03:54.416 13:13:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:54.416 13:13:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:54.416 13:13:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:54.416 13:13:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:54.416 13:13:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:54.416 13:13:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.416 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.416 13:13:43 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:03:54.416 13:13:43 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:03:54.416 13:13:43 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:03:54.416 13:13:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:54.416 13:13:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.416 13:13:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.416 13:13:43 -- common/autotest_common.sh@10 -- # set +x 00:03:54.675 ************************************ 00:03:54.675 START TEST env 00:03:54.675 ************************************ 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:54.675 * Looking for test storage... 00:03:54.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:54.675 13:13:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.675 13:13:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.675 13:13:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.675 13:13:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.675 13:13:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.675 13:13:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.675 13:13:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.675 13:13:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.675 13:13:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.675 13:13:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.675 13:13:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.675 13:13:43 env -- scripts/common.sh@344 -- # case "$op" in 00:03:54.675 13:13:43 env -- scripts/common.sh@345 -- # : 1 00:03:54.675 13:13:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.675 13:13:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.675 13:13:43 env -- scripts/common.sh@365 -- # decimal 1 00:03:54.675 13:13:43 env -- scripts/common.sh@353 -- # local d=1 00:03:54.675 13:13:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.675 13:13:43 env -- scripts/common.sh@355 -- # echo 1 00:03:54.675 13:13:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.675 13:13:43 env -- scripts/common.sh@366 -- # decimal 2 00:03:54.675 13:13:43 env -- scripts/common.sh@353 -- # local d=2 00:03:54.675 13:13:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.675 13:13:43 env -- scripts/common.sh@355 -- # echo 2 00:03:54.675 13:13:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.675 13:13:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.675 13:13:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.675 13:13:43 env -- scripts/common.sh@368 -- # return 0 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:54.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.675 --rc genhtml_branch_coverage=1 00:03:54.675 --rc genhtml_function_coverage=1 00:03:54.675 --rc genhtml_legend=1 00:03:54.675 --rc geninfo_all_blocks=1 00:03:54.675 --rc geninfo_unexecuted_blocks=1 00:03:54.675 00:03:54.675 ' 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:54.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.675 --rc genhtml_branch_coverage=1 00:03:54.675 --rc genhtml_function_coverage=1 00:03:54.675 --rc genhtml_legend=1 00:03:54.675 --rc geninfo_all_blocks=1 00:03:54.675 --rc geninfo_unexecuted_blocks=1 00:03:54.675 00:03:54.675 ' 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:54.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.675 --rc genhtml_branch_coverage=1 00:03:54.675 --rc genhtml_function_coverage=1 00:03:54.675 --rc genhtml_legend=1 00:03:54.675 --rc geninfo_all_blocks=1 00:03:54.675 --rc geninfo_unexecuted_blocks=1 00:03:54.675 00:03:54.675 ' 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:54.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.675 --rc genhtml_branch_coverage=1 00:03:54.675 --rc genhtml_function_coverage=1 00:03:54.675 --rc genhtml_legend=1 00:03:54.675 --rc geninfo_all_blocks=1 00:03:54.675 --rc geninfo_unexecuted_blocks=1 00:03:54.675 00:03:54.675 ' 00:03:54.675 13:13:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.675 13:13:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.675 13:13:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.675 ************************************ 00:03:54.675 START TEST env_memory 00:03:54.675 ************************************ 00:03:54.675 13:13:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:54.675 00:03:54.675 00:03:54.675 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.675 http://cunit.sourceforge.net/ 00:03:54.675 00:03:54.675 00:03:54.675 Suite: memory 00:03:54.934 Test: alloc and free memory map ...[2024-11-17 13:13:43.900415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:54.934 passed 00:03:54.934 Test: mem map translation ...[2024-11-17 13:13:43.933040] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:54.934 [2024-11-17 13:13:43.933077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:54.934 [2024-11-17 13:13:43.933145] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:54.934 [2024-11-17 13:13:43.933155] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:54.934 passed 00:03:54.934 Test: mem map registration ...[2024-11-17 13:13:44.005209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:54.934 [2024-11-17 13:13:44.005244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:54.934 passed 00:03:54.934 Test: mem map adjacent registrations ...passed 00:03:54.934 00:03:54.934 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.934 suites 1 1 n/a 0 0 00:03:54.934 tests 4 4 4 0 0 00:03:54.934 asserts 152 152 152 0 n/a 00:03:54.934 00:03:54.934 Elapsed time = 0.226 seconds 00:03:54.934 00:03:54.934 real 0m0.245s 00:03:54.934 user 0m0.229s 00:03:54.934 sys 0m0.012s 00:03:54.934 13:13:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.934 ************************************ 00:03:54.934 END TEST env_memory 00:03:54.934 13:13:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:54.934 ************************************ 00:03:54.934 13:13:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:54.934 13:13:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.934 13:13:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.934 13:13:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.934 ************************************ 00:03:54.934 START TEST env_vtophys 00:03:54.934 ************************************ 00:03:54.934 13:13:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:55.193 EAL: lib.eal log level changed from notice to debug 00:03:55.193 EAL: Detected lcore 0 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 1 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 2 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 3 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 4 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 5 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 6 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 7 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 8 as core 0 on socket 0 00:03:55.193 EAL: Detected lcore 9 as core 0 on socket 0 00:03:55.193 EAL: Maximum logical cores by configuration: 128 00:03:55.193 EAL: Detected CPU lcores: 10 00:03:55.193 EAL: Detected NUMA nodes: 1 00:03:55.193 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:55.193 EAL: Detected shared linkage of DPDK 00:03:55.193 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:55.193 EAL: Selected IOVA mode 'PA' 00:03:55.193 EAL: Probing VFIO support... 00:03:55.193 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:55.193 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:55.193 EAL: Ask a virtual area of 0x2e000 bytes 00:03:55.193 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:55.193 EAL: Setting up physically contiguous memory... 00:03:55.193 EAL: Setting maximum number of open files to 524288 00:03:55.193 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:55.193 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:55.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.193 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:55.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.193 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.193 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:55.193 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:55.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.193 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:55.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.193 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.193 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:55.193 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:55.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.193 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:55.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.193 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.193 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:55.193 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:55.193 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.193 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:55.193 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.193 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.193 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:55.193 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:55.193 EAL: Hugepages will be freed exactly as allocated. 00:03:55.193 EAL: No shared files mode enabled, IPC is disabled 00:03:55.193 EAL: No shared files mode enabled, IPC is disabled 00:03:55.193 EAL: TSC frequency is ~2200000 KHz 00:03:55.193 EAL: Main lcore 0 is ready (tid=7f6b70c12a00;cpuset=[0]) 00:03:55.193 EAL: Trying to obtain current memory policy. 00:03:55.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.193 EAL: Restoring previous memory policy: 0 00:03:55.193 EAL: request: mp_malloc_sync 00:03:55.193 EAL: No shared files mode enabled, IPC is disabled 00:03:55.193 EAL: Heap on socket 0 was expanded by 2MB 00:03:55.193 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:55.193 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:55.193 EAL: Mem event callback 'spdk:(nil)' registered 00:03:55.193 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:55.193 00:03:55.193 00:03:55.193 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.193 http://cunit.sourceforge.net/ 00:03:55.194 00:03:55.194 00:03:55.194 Suite: components_suite 00:03:55.194 Test: vtophys_malloc_test ...passed 00:03:55.194 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:55.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.194 EAL: Restoring previous memory policy: 4 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was expanded by 4MB 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was shrunk by 4MB 00:03:55.194 EAL: Trying to obtain current memory policy. 00:03:55.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.194 EAL: Restoring previous memory policy: 4 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was expanded by 6MB 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was shrunk by 6MB 00:03:55.194 EAL: Trying to obtain current memory policy. 00:03:55.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.194 EAL: Restoring previous memory policy: 4 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was expanded by 10MB 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was shrunk by 10MB 00:03:55.194 EAL: Trying to obtain current memory policy. 00:03:55.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.194 EAL: Restoring previous memory policy: 4 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was expanded by 18MB 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was shrunk by 18MB 00:03:55.194 EAL: Trying to obtain current memory policy. 00:03:55.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.194 EAL: Restoring previous memory policy: 4 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was expanded by 34MB 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was shrunk by 34MB 00:03:55.194 EAL: Trying to obtain current memory policy. 
00:03:55.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.194 EAL: Restoring previous memory policy: 4 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.194 EAL: request: mp_malloc_sync 00:03:55.194 EAL: No shared files mode enabled, IPC is disabled 00:03:55.194 EAL: Heap on socket 0 was expanded by 66MB 00:03:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.453 EAL: request: mp_malloc_sync 00:03:55.453 EAL: No shared files mode enabled, IPC is disabled 00:03:55.453 EAL: Heap on socket 0 was shrunk by 66MB 00:03:55.453 EAL: Trying to obtain current memory policy. 00:03:55.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.453 EAL: Restoring previous memory policy: 4 00:03:55.453 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.453 EAL: request: mp_malloc_sync 00:03:55.453 EAL: No shared files mode enabled, IPC is disabled 00:03:55.453 EAL: Heap on socket 0 was expanded by 130MB 00:03:55.453 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.453 EAL: request: mp_malloc_sync 00:03:55.453 EAL: No shared files mode enabled, IPC is disabled 00:03:55.453 EAL: Heap on socket 0 was shrunk by 130MB 00:03:55.453 EAL: Trying to obtain current memory policy. 00:03:55.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.453 EAL: Restoring previous memory policy: 4 00:03:55.453 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.453 EAL: request: mp_malloc_sync 00:03:55.453 EAL: No shared files mode enabled, IPC is disabled 00:03:55.453 EAL: Heap on socket 0 was expanded by 258MB 00:03:55.712 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.712 EAL: request: mp_malloc_sync 00:03:55.712 EAL: No shared files mode enabled, IPC is disabled 00:03:55.712 EAL: Heap on socket 0 was shrunk by 258MB 00:03:55.712 EAL: Trying to obtain current memory policy. 00:03:55.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.971 EAL: Restoring previous memory policy: 4 00:03:55.971 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.971 EAL: request: mp_malloc_sync 00:03:55.971 EAL: No shared files mode enabled, IPC is disabled 00:03:55.971 EAL: Heap on socket 0 was expanded by 514MB 00:03:55.971 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.229 EAL: request: mp_malloc_sync 00:03:56.229 EAL: No shared files mode enabled, IPC is disabled 00:03:56.229 EAL: Heap on socket 0 was shrunk by 514MB 00:03:56.229 EAL: Trying to obtain current memory policy. 
00:03:56.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.488 EAL: Restoring previous memory policy: 4 00:03:56.488 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.488 EAL: request: mp_malloc_sync 00:03:56.488 EAL: No shared files mode enabled, IPC is disabled 00:03:56.488 EAL: Heap on socket 0 was expanded by 1026MB 00:03:56.746 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.004 passed 00:03:57.004 00:03:57.004 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.004 suites 1 1 n/a 0 0 00:03:57.004 tests 2 2 2 0 0 00:03:57.004 asserts 5554 5554 5554 0 n/a 00:03:57.004 00:03:57.004 Elapsed time = 1.797 seconds 00:03:57.004 EAL: request: mp_malloc_sync 00:03:57.004 EAL: No shared files mode enabled, IPC is disabled 00:03:57.004 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.004 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.004 EAL: request: mp_malloc_sync 00:03:57.004 EAL: No shared files mode enabled, IPC is disabled 00:03:57.004 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.004 EAL: No shared files mode enabled, IPC is disabled 00:03:57.004 EAL: No shared files mode enabled, IPC is disabled 00:03:57.004 EAL: No shared files mode enabled, IPC is disabled 00:03:57.004 00:03:57.004 real 0m2.015s 00:03:57.004 user 0m1.167s 00:03:57.004 sys 0m0.710s 00:03:57.004 13:13:46 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.004 ************************************ 00:03:57.004 END TEST env_vtophys 00:03:57.004 ************************************ 00:03:57.004 13:13:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:57.004 13:13:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:57.004 13:13:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.004 13:13:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.004 13:13:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.004 ************************************ 00:03:57.004 START TEST env_pci 00:03:57.004 ************************************ 00:03:57.004 13:13:46 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:57.263 00:03:57.263 00:03:57.263 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.263 http://cunit.sourceforge.net/ 00:03:57.263 00:03:57.263 00:03:57.263 Suite: pci 00:03:57.263 Test: pci_hook ...[2024-11-17 13:13:46.228475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56679 has claimed it 00:03:57.263 passed 00:03:57.263 00:03:57.263 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.263 suites 1 1 n/a 0 0 00:03:57.263 tests 1 1 1 0 0 00:03:57.263 asserts 25 25 25 0 n/a 00:03:57.263 00:03:57.263 Elapsed time = 0.002 seconds 00:03:57.263 EAL: Cannot find device (10000:00:01.0) 00:03:57.263 EAL: Failed to attach device on primary process 00:03:57.263 00:03:57.263 real 0m0.020s 00:03:57.263 user 0m0.008s 00:03:57.263 sys 0m0.012s 00:03:57.263 13:13:46 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.263 13:13:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:57.263 ************************************ 00:03:57.263 END TEST env_pci 00:03:57.263 ************************************ 00:03:57.263 13:13:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:57.263 13:13:46 env -- env/env.sh@15 -- # uname 00:03:57.263 13:13:46 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:57.263 13:13:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:57.263 13:13:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.263 13:13:46 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:57.263 13:13:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.263 13:13:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.263 ************************************ 00:03:57.263 START TEST env_dpdk_post_init 00:03:57.263 ************************************ 00:03:57.263 13:13:46 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.263 EAL: Detected CPU lcores: 10 00:03:57.263 EAL: Detected NUMA nodes: 1 00:03:57.263 EAL: Detected shared linkage of DPDK 00:03:57.263 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.263 EAL: Selected IOVA mode 'PA' 00:03:57.263 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.263 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:57.263 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:57.522 Starting DPDK initialization... 00:03:57.522 Starting SPDK post initialization... 00:03:57.522 SPDK NVMe probe 00:03:57.522 Attaching to 0000:00:10.0 00:03:57.522 Attaching to 0000:00:11.0 00:03:57.522 Attached to 0000:00:10.0 00:03:57.522 Attached to 0000:00:11.0 00:03:57.522 Cleaning up... 00:03:57.522 00:03:57.522 real 0m0.195s 00:03:57.522 user 0m0.057s 00:03:57.522 sys 0m0.037s 00:03:57.522 13:13:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.522 13:13:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.522 ************************************ 00:03:57.522 END TEST env_dpdk_post_init 00:03:57.522 ************************************ 00:03:57.522 13:13:46 env -- env/env.sh@26 -- # uname 00:03:57.522 13:13:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:57.522 13:13:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:57.522 13:13:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.522 13:13:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.522 13:13:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.522 ************************************ 00:03:57.522 START TEST env_mem_callbacks 00:03:57.522 ************************************ 00:03:57.522 13:13:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:57.522 EAL: Detected CPU lcores: 10 00:03:57.522 EAL: Detected NUMA nodes: 1 00:03:57.522 EAL: Detected shared linkage of DPDK 00:03:57.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.522 EAL: Selected IOVA mode 'PA' 00:03:57.522 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.522 00:03:57.522 00:03:57.522 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.522 http://cunit.sourceforge.net/ 00:03:57.522 00:03:57.522 00:03:57.522 Suite: memory 00:03:57.522 Test: test ... 
00:03:57.522 register 0x200000200000 2097152 00:03:57.522 malloc 3145728 00:03:57.522 register 0x200000400000 4194304 00:03:57.522 buf 0x200000500000 len 3145728 PASSED 00:03:57.522 malloc 64 00:03:57.522 buf 0x2000004fff40 len 64 PASSED 00:03:57.522 malloc 4194304 00:03:57.522 register 0x200000800000 6291456 00:03:57.522 buf 0x200000a00000 len 4194304 PASSED 00:03:57.522 free 0x200000500000 3145728 00:03:57.522 free 0x2000004fff40 64 00:03:57.522 unregister 0x200000400000 4194304 PASSED 00:03:57.522 free 0x200000a00000 4194304 00:03:57.522 unregister 0x200000800000 6291456 PASSED 00:03:57.522 malloc 8388608 00:03:57.522 register 0x200000400000 10485760 00:03:57.522 buf 0x200000600000 len 8388608 PASSED 00:03:57.522 free 0x200000600000 8388608 00:03:57.522 unregister 0x200000400000 10485760 PASSED 00:03:57.522 passed 00:03:57.522 00:03:57.522 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.522 suites 1 1 n/a 0 0 00:03:57.522 tests 1 1 1 0 0 00:03:57.522 asserts 15 15 15 0 n/a 00:03:57.522 00:03:57.522 Elapsed time = 0.010 seconds 00:03:57.522 00:03:57.522 real 0m0.141s 00:03:57.522 user 0m0.015s 00:03:57.522 sys 0m0.025s 00:03:57.522 13:13:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.522 ************************************ 00:03:57.523 13:13:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:57.523 END TEST env_mem_callbacks 00:03:57.523 ************************************ 00:03:57.523 00:03:57.523 real 0m3.091s 00:03:57.523 user 0m1.678s 00:03:57.523 sys 0m1.059s 00:03:57.523 13:13:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.523 13:13:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.523 ************************************ 00:03:57.523 END TEST env 00:03:57.523 ************************************ 00:03:57.781 13:13:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:57.781 13:13:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.781 13:13:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.781 13:13:46 -- common/autotest_common.sh@10 -- # set +x 00:03:57.781 ************************************ 00:03:57.781 START TEST rpc 00:03:57.781 ************************************ 00:03:57.781 13:13:46 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:57.781 * Looking for test storage... 
00:03:57.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.781 13:13:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.781 13:13:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.781 13:13:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.781 13:13:46 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.781 13:13:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.781 13:13:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.781 13:13:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.781 13:13:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.781 13:13:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.781 13:13:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.781 13:13:46 rpc -- scripts/common.sh@345 -- # : 1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.781 13:13:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.781 13:13:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.781 13:13:46 rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.781 13:13:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.781 13:13:46 rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.781 13:13:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.782 13:13:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.782 13:13:46 rpc -- scripts/common.sh@368 -- # return 0 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.782 --rc genhtml_branch_coverage=1 00:03:57.782 --rc genhtml_function_coverage=1 00:03:57.782 --rc genhtml_legend=1 00:03:57.782 --rc geninfo_all_blocks=1 00:03:57.782 --rc geninfo_unexecuted_blocks=1 00:03:57.782 00:03:57.782 ' 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.782 --rc genhtml_branch_coverage=1 00:03:57.782 --rc genhtml_function_coverage=1 00:03:57.782 --rc genhtml_legend=1 00:03:57.782 --rc geninfo_all_blocks=1 00:03:57.782 --rc geninfo_unexecuted_blocks=1 00:03:57.782 00:03:57.782 ' 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.782 --rc genhtml_branch_coverage=1 00:03:57.782 --rc genhtml_function_coverage=1 00:03:57.782 --rc 
genhtml_legend=1 00:03:57.782 --rc geninfo_all_blocks=1 00:03:57.782 --rc geninfo_unexecuted_blocks=1 00:03:57.782 00:03:57.782 ' 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.782 --rc genhtml_branch_coverage=1 00:03:57.782 --rc genhtml_function_coverage=1 00:03:57.782 --rc genhtml_legend=1 00:03:57.782 --rc geninfo_all_blocks=1 00:03:57.782 --rc geninfo_unexecuted_blocks=1 00:03:57.782 00:03:57.782 ' 00:03:57.782 13:13:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56796 00:03:57.782 13:13:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.782 13:13:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:57.782 13:13:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56796 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 56796 ']' 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.782 13:13:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.041 [2024-11-17 13:13:47.028888] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:03:58.041 [2024-11-17 13:13:47.029002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56796 ] 00:03:58.041 [2024-11-17 13:13:47.175466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.041 [2024-11-17 13:13:47.225306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:58.041 [2024-11-17 13:13:47.225367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56796' to capture a snapshot of events at runtime. 00:03:58.041 [2024-11-17 13:13:47.225377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:58.041 [2024-11-17 13:13:47.225385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:58.041 [2024-11-17 13:13:47.225391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56796 for offline analysis/debug. 
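The NOTICE lines above come from spdk_tgt itself and describe how to inspect the tracepoints enabled by starting it with '-e bdev'. A minimal sketch of that workflow, using only the pid and shared-memory path the target reported:

    # Attach to the running target and capture a snapshot of trace events.
    spdk_trace -s spdk_tgt -p 56796
    # Or preserve the shared-memory trace file for offline analysis once the target exits.
    cp /dev/shm/spdk_tgt_trace.pid56796 /tmp/spdk_tgt_trace.pid56796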
00:03:58.041 [2024-11-17 13:13:47.225816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.299 [2024-11-17 13:13:47.317033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:03:58.559 13:13:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.559 13:13:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:58.559 13:13:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.559 13:13:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.559 13:13:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:58.559 13:13:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:58.559 13:13:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.559 13:13:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.559 13:13:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.559 ************************************ 00:03:58.559 START TEST rpc_integrity 00:03:58.559 ************************************ 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.559 { 00:03:58.559 "name": "Malloc0", 00:03:58.559 "aliases": [ 00:03:58.559 "31dad589-d5e0-498b-8f55-e5bdb7ac660b" 00:03:58.559 ], 00:03:58.559 "product_name": "Malloc disk", 00:03:58.559 "block_size": 512, 00:03:58.559 "num_blocks": 16384, 00:03:58.559 "uuid": "31dad589-d5e0-498b-8f55-e5bdb7ac660b", 00:03:58.559 "assigned_rate_limits": { 00:03:58.559 "rw_ios_per_sec": 0, 00:03:58.559 "rw_mbytes_per_sec": 0, 00:03:58.559 "r_mbytes_per_sec": 0, 00:03:58.559 "w_mbytes_per_sec": 0 00:03:58.559 }, 00:03:58.559 "claimed": false, 00:03:58.559 "zoned": false, 00:03:58.559 
"supported_io_types": { 00:03:58.559 "read": true, 00:03:58.559 "write": true, 00:03:58.559 "unmap": true, 00:03:58.559 "flush": true, 00:03:58.559 "reset": true, 00:03:58.559 "nvme_admin": false, 00:03:58.559 "nvme_io": false, 00:03:58.559 "nvme_io_md": false, 00:03:58.559 "write_zeroes": true, 00:03:58.559 "zcopy": true, 00:03:58.559 "get_zone_info": false, 00:03:58.559 "zone_management": false, 00:03:58.559 "zone_append": false, 00:03:58.559 "compare": false, 00:03:58.559 "compare_and_write": false, 00:03:58.559 "abort": true, 00:03:58.559 "seek_hole": false, 00:03:58.559 "seek_data": false, 00:03:58.559 "copy": true, 00:03:58.559 "nvme_iov_md": false 00:03:58.559 }, 00:03:58.559 "memory_domains": [ 00:03:58.559 { 00:03:58.559 "dma_device_id": "system", 00:03:58.559 "dma_device_type": 1 00:03:58.559 }, 00:03:58.559 { 00:03:58.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.559 "dma_device_type": 2 00:03:58.559 } 00:03:58.559 ], 00:03:58.559 "driver_specific": {} 00:03:58.559 } 00:03:58.559 ]' 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.559 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.559 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.559 [2024-11-17 13:13:47.728812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:58.560 [2024-11-17 13:13:47.728858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.560 [2024-11-17 13:13:47.728877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1369f10 00:03:58.560 [2024-11-17 13:13:47.728887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.560 [2024-11-17 13:13:47.730368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.560 [2024-11-17 13:13:47.730398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.560 Passthru0 00:03:58.560 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.560 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.560 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.560 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.560 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.560 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.560 { 00:03:58.560 "name": "Malloc0", 00:03:58.560 "aliases": [ 00:03:58.560 "31dad589-d5e0-498b-8f55-e5bdb7ac660b" 00:03:58.560 ], 00:03:58.560 "product_name": "Malloc disk", 00:03:58.560 "block_size": 512, 00:03:58.560 "num_blocks": 16384, 00:03:58.560 "uuid": "31dad589-d5e0-498b-8f55-e5bdb7ac660b", 00:03:58.560 "assigned_rate_limits": { 00:03:58.560 "rw_ios_per_sec": 0, 00:03:58.560 "rw_mbytes_per_sec": 0, 00:03:58.560 "r_mbytes_per_sec": 0, 00:03:58.560 "w_mbytes_per_sec": 0 00:03:58.560 }, 00:03:58.560 "claimed": true, 00:03:58.560 "claim_type": "exclusive_write", 00:03:58.560 "zoned": false, 00:03:58.560 "supported_io_types": { 00:03:58.560 "read": true, 00:03:58.560 "write": true, 00:03:58.560 "unmap": true, 00:03:58.560 "flush": true, 00:03:58.560 "reset": true, 00:03:58.560 "nvme_admin": false, 
00:03:58.560 "nvme_io": false, 00:03:58.560 "nvme_io_md": false, 00:03:58.560 "write_zeroes": true, 00:03:58.560 "zcopy": true, 00:03:58.560 "get_zone_info": false, 00:03:58.560 "zone_management": false, 00:03:58.560 "zone_append": false, 00:03:58.560 "compare": false, 00:03:58.560 "compare_and_write": false, 00:03:58.560 "abort": true, 00:03:58.560 "seek_hole": false, 00:03:58.560 "seek_data": false, 00:03:58.560 "copy": true, 00:03:58.560 "nvme_iov_md": false 00:03:58.560 }, 00:03:58.560 "memory_domains": [ 00:03:58.560 { 00:03:58.560 "dma_device_id": "system", 00:03:58.560 "dma_device_type": 1 00:03:58.560 }, 00:03:58.560 { 00:03:58.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.560 "dma_device_type": 2 00:03:58.560 } 00:03:58.560 ], 00:03:58.560 "driver_specific": {} 00:03:58.560 }, 00:03:58.560 { 00:03:58.560 "name": "Passthru0", 00:03:58.560 "aliases": [ 00:03:58.560 "9045c1a7-d081-5b4b-9053-c443f8f5f8ea" 00:03:58.560 ], 00:03:58.560 "product_name": "passthru", 00:03:58.560 "block_size": 512, 00:03:58.560 "num_blocks": 16384, 00:03:58.560 "uuid": "9045c1a7-d081-5b4b-9053-c443f8f5f8ea", 00:03:58.560 "assigned_rate_limits": { 00:03:58.560 "rw_ios_per_sec": 0, 00:03:58.560 "rw_mbytes_per_sec": 0, 00:03:58.560 "r_mbytes_per_sec": 0, 00:03:58.560 "w_mbytes_per_sec": 0 00:03:58.560 }, 00:03:58.560 "claimed": false, 00:03:58.560 "zoned": false, 00:03:58.560 "supported_io_types": { 00:03:58.560 "read": true, 00:03:58.560 "write": true, 00:03:58.560 "unmap": true, 00:03:58.560 "flush": true, 00:03:58.560 "reset": true, 00:03:58.560 "nvme_admin": false, 00:03:58.560 "nvme_io": false, 00:03:58.560 "nvme_io_md": false, 00:03:58.560 "write_zeroes": true, 00:03:58.560 "zcopy": true, 00:03:58.560 "get_zone_info": false, 00:03:58.560 "zone_management": false, 00:03:58.560 "zone_append": false, 00:03:58.560 "compare": false, 00:03:58.560 "compare_and_write": false, 00:03:58.560 "abort": true, 00:03:58.560 "seek_hole": false, 00:03:58.560 "seek_data": false, 00:03:58.560 "copy": true, 00:03:58.560 "nvme_iov_md": false 00:03:58.560 }, 00:03:58.560 "memory_domains": [ 00:03:58.560 { 00:03:58.560 "dma_device_id": "system", 00:03:58.560 "dma_device_type": 1 00:03:58.560 }, 00:03:58.560 { 00:03:58.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.560 "dma_device_type": 2 00:03:58.560 } 00:03:58.560 ], 00:03:58.560 "driver_specific": { 00:03:58.560 "passthru": { 00:03:58.560 "name": "Passthru0", 00:03:58.560 "base_bdev_name": "Malloc0" 00:03:58.560 } 00:03:58.560 } 00:03:58.560 } 00:03:58.560 ]' 00:03:58.560 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.819 13:13:47 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.819 13:13:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.819 00:03:58.819 real 0m0.329s 00:03:58.819 user 0m0.219s 00:03:58.819 sys 0m0.042s 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.819 ************************************ 00:03:58.819 END TEST rpc_integrity 00:03:58.819 13:13:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 ************************************ 00:03:58.819 13:13:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:58.819 13:13:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.819 13:13:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.819 13:13:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 ************************************ 00:03:58.819 START TEST rpc_plugins 00:03:58.819 ************************************ 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:58.819 13:13:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.819 13:13:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:58.819 13:13:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.819 13:13:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.819 13:13:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:58.819 { 00:03:58.819 "name": "Malloc1", 00:03:58.819 "aliases": [ 00:03:58.819 "b1f61c63-92ed-4bfe-ab63-03bc28abbd36" 00:03:58.819 ], 00:03:58.819 "product_name": "Malloc disk", 00:03:58.819 "block_size": 4096, 00:03:58.819 "num_blocks": 256, 00:03:58.819 "uuid": "b1f61c63-92ed-4bfe-ab63-03bc28abbd36", 00:03:58.819 "assigned_rate_limits": { 00:03:58.819 "rw_ios_per_sec": 0, 00:03:58.819 "rw_mbytes_per_sec": 0, 00:03:58.819 "r_mbytes_per_sec": 0, 00:03:58.819 "w_mbytes_per_sec": 0 00:03:58.819 }, 00:03:58.819 "claimed": false, 00:03:58.819 "zoned": false, 00:03:58.819 "supported_io_types": { 00:03:58.819 "read": true, 00:03:58.819 "write": true, 00:03:58.819 "unmap": true, 00:03:58.819 "flush": true, 00:03:58.819 "reset": true, 00:03:58.819 "nvme_admin": false, 00:03:58.819 "nvme_io": false, 00:03:58.819 "nvme_io_md": false, 00:03:58.819 "write_zeroes": true, 00:03:58.819 "zcopy": true, 00:03:58.819 "get_zone_info": false, 00:03:58.819 "zone_management": false, 00:03:58.819 "zone_append": false, 00:03:58.819 "compare": false, 00:03:58.819 "compare_and_write": false, 00:03:58.819 "abort": true, 00:03:58.819 "seek_hole": false, 00:03:58.819 "seek_data": false, 00:03:58.819 "copy": true, 00:03:58.819 "nvme_iov_md": false 00:03:58.819 }, 00:03:58.819 "memory_domains": [ 00:03:58.819 { 
00:03:58.819 "dma_device_id": "system", 00:03:58.819 "dma_device_type": 1 00:03:58.819 }, 00:03:58.819 { 00:03:58.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.819 "dma_device_type": 2 00:03:58.819 } 00:03:58.819 ], 00:03:58.819 "driver_specific": {} 00:03:58.819 } 00:03:58.819 ]' 00:03:58.819 13:13:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:59.078 13:13:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:59.078 13:13:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.078 13:13:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.078 13:13:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:59.078 13:13:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:59.078 13:13:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:59.078 00:03:59.078 real 0m0.156s 00:03:59.078 user 0m0.106s 00:03:59.078 sys 0m0.016s 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.078 ************************************ 00:03:59.078 END TEST rpc_plugins 00:03:59.078 13:13:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.078 ************************************ 00:03:59.078 13:13:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:59.078 13:13:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.078 13:13:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.078 13:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.078 ************************************ 00:03:59.078 START TEST rpc_trace_cmd_test 00:03:59.078 ************************************ 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:59.078 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56796", 00:03:59.078 "tpoint_group_mask": "0x8", 00:03:59.078 "iscsi_conn": { 00:03:59.078 "mask": "0x2", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "scsi": { 00:03:59.078 "mask": "0x4", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "bdev": { 00:03:59.078 "mask": "0x8", 00:03:59.078 "tpoint_mask": "0xffffffffffffffff" 00:03:59.078 }, 00:03:59.078 "nvmf_rdma": { 00:03:59.078 "mask": "0x10", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "nvmf_tcp": { 00:03:59.078 "mask": "0x20", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "ftl": { 00:03:59.078 
"mask": "0x40", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "blobfs": { 00:03:59.078 "mask": "0x80", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "dsa": { 00:03:59.078 "mask": "0x200", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "thread": { 00:03:59.078 "mask": "0x400", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "nvme_pcie": { 00:03:59.078 "mask": "0x800", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "iaa": { 00:03:59.078 "mask": "0x1000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "nvme_tcp": { 00:03:59.078 "mask": "0x2000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "bdev_nvme": { 00:03:59.078 "mask": "0x4000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "sock": { 00:03:59.078 "mask": "0x8000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "blob": { 00:03:59.078 "mask": "0x10000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "bdev_raid": { 00:03:59.078 "mask": "0x20000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 }, 00:03:59.078 "scheduler": { 00:03:59.078 "mask": "0x40000", 00:03:59.078 "tpoint_mask": "0x0" 00:03:59.078 } 00:03:59.078 }' 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:59.078 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:59.337 00:03:59.337 real 0m0.271s 00:03:59.337 user 0m0.225s 00:03:59.337 sys 0m0.035s 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.337 13:13:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.337 ************************************ 00:03:59.337 END TEST rpc_trace_cmd_test 00:03:59.337 ************************************ 00:03:59.337 13:13:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:59.337 13:13:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:59.337 13:13:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:59.337 13:13:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.337 13:13:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.337 13:13:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.337 ************************************ 00:03:59.337 START TEST rpc_daemon_integrity 00:03:59.337 ************************************ 00:03:59.337 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:59.337 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.337 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.337 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.337 
13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.337 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.337 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.596 { 00:03:59.596 "name": "Malloc2", 00:03:59.596 "aliases": [ 00:03:59.596 "911b8dee-f94d-42f4-bec0-f3974948ea66" 00:03:59.596 ], 00:03:59.596 "product_name": "Malloc disk", 00:03:59.596 "block_size": 512, 00:03:59.596 "num_blocks": 16384, 00:03:59.596 "uuid": "911b8dee-f94d-42f4-bec0-f3974948ea66", 00:03:59.596 "assigned_rate_limits": { 00:03:59.596 "rw_ios_per_sec": 0, 00:03:59.596 "rw_mbytes_per_sec": 0, 00:03:59.596 "r_mbytes_per_sec": 0, 00:03:59.596 "w_mbytes_per_sec": 0 00:03:59.596 }, 00:03:59.596 "claimed": false, 00:03:59.596 "zoned": false, 00:03:59.596 "supported_io_types": { 00:03:59.596 "read": true, 00:03:59.596 "write": true, 00:03:59.596 "unmap": true, 00:03:59.596 "flush": true, 00:03:59.596 "reset": true, 00:03:59.596 "nvme_admin": false, 00:03:59.596 "nvme_io": false, 00:03:59.596 "nvme_io_md": false, 00:03:59.596 "write_zeroes": true, 00:03:59.596 "zcopy": true, 00:03:59.596 "get_zone_info": false, 00:03:59.596 "zone_management": false, 00:03:59.596 "zone_append": false, 00:03:59.596 "compare": false, 00:03:59.596 "compare_and_write": false, 00:03:59.596 "abort": true, 00:03:59.596 "seek_hole": false, 00:03:59.596 "seek_data": false, 00:03:59.596 "copy": true, 00:03:59.596 "nvme_iov_md": false 00:03:59.596 }, 00:03:59.596 "memory_domains": [ 00:03:59.596 { 00:03:59.596 "dma_device_id": "system", 00:03:59.596 "dma_device_type": 1 00:03:59.596 }, 00:03:59.596 { 00:03:59.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.596 "dma_device_type": 2 00:03:59.596 } 00:03:59.596 ], 00:03:59.596 "driver_specific": {} 00:03:59.596 } 00:03:59.596 ]' 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.596 [2024-11-17 13:13:48.654741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:59.596 [2024-11-17 13:13:48.654821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:03:59.596 [2024-11-17 13:13:48.654838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1504980 00:03:59.596 [2024-11-17 13:13:48.654848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.596 [2024-11-17 13:13:48.656314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.596 [2024-11-17 13:13:48.656341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.596 Passthru0 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.596 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.596 { 00:03:59.596 "name": "Malloc2", 00:03:59.596 "aliases": [ 00:03:59.596 "911b8dee-f94d-42f4-bec0-f3974948ea66" 00:03:59.596 ], 00:03:59.596 "product_name": "Malloc disk", 00:03:59.596 "block_size": 512, 00:03:59.596 "num_blocks": 16384, 00:03:59.596 "uuid": "911b8dee-f94d-42f4-bec0-f3974948ea66", 00:03:59.596 "assigned_rate_limits": { 00:03:59.596 "rw_ios_per_sec": 0, 00:03:59.596 "rw_mbytes_per_sec": 0, 00:03:59.596 "r_mbytes_per_sec": 0, 00:03:59.596 "w_mbytes_per_sec": 0 00:03:59.596 }, 00:03:59.596 "claimed": true, 00:03:59.596 "claim_type": "exclusive_write", 00:03:59.596 "zoned": false, 00:03:59.596 "supported_io_types": { 00:03:59.596 "read": true, 00:03:59.596 "write": true, 00:03:59.596 "unmap": true, 00:03:59.596 "flush": true, 00:03:59.596 "reset": true, 00:03:59.596 "nvme_admin": false, 00:03:59.596 "nvme_io": false, 00:03:59.596 "nvme_io_md": false, 00:03:59.596 "write_zeroes": true, 00:03:59.596 "zcopy": true, 00:03:59.596 "get_zone_info": false, 00:03:59.596 "zone_management": false, 00:03:59.596 "zone_append": false, 00:03:59.596 "compare": false, 00:03:59.596 "compare_and_write": false, 00:03:59.596 "abort": true, 00:03:59.596 "seek_hole": false, 00:03:59.596 "seek_data": false, 00:03:59.596 "copy": true, 00:03:59.596 "nvme_iov_md": false 00:03:59.596 }, 00:03:59.596 "memory_domains": [ 00:03:59.596 { 00:03:59.596 "dma_device_id": "system", 00:03:59.596 "dma_device_type": 1 00:03:59.596 }, 00:03:59.596 { 00:03:59.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.596 "dma_device_type": 2 00:03:59.596 } 00:03:59.596 ], 00:03:59.596 "driver_specific": {} 00:03:59.596 }, 00:03:59.596 { 00:03:59.596 "name": "Passthru0", 00:03:59.596 "aliases": [ 00:03:59.596 "30dc964e-da77-5564-8661-e160190a083c" 00:03:59.596 ], 00:03:59.596 "product_name": "passthru", 00:03:59.596 "block_size": 512, 00:03:59.596 "num_blocks": 16384, 00:03:59.596 "uuid": "30dc964e-da77-5564-8661-e160190a083c", 00:03:59.596 "assigned_rate_limits": { 00:03:59.596 "rw_ios_per_sec": 0, 00:03:59.596 "rw_mbytes_per_sec": 0, 00:03:59.596 "r_mbytes_per_sec": 0, 00:03:59.596 "w_mbytes_per_sec": 0 00:03:59.596 }, 00:03:59.596 "claimed": false, 00:03:59.596 "zoned": false, 00:03:59.596 "supported_io_types": { 00:03:59.596 "read": true, 00:03:59.596 "write": true, 00:03:59.596 "unmap": true, 00:03:59.596 "flush": true, 00:03:59.596 "reset": true, 00:03:59.596 "nvme_admin": false, 00:03:59.596 "nvme_io": false, 00:03:59.596 
"nvme_io_md": false, 00:03:59.596 "write_zeroes": true, 00:03:59.596 "zcopy": true, 00:03:59.596 "get_zone_info": false, 00:03:59.596 "zone_management": false, 00:03:59.596 "zone_append": false, 00:03:59.596 "compare": false, 00:03:59.596 "compare_and_write": false, 00:03:59.596 "abort": true, 00:03:59.596 "seek_hole": false, 00:03:59.596 "seek_data": false, 00:03:59.596 "copy": true, 00:03:59.596 "nvme_iov_md": false 00:03:59.596 }, 00:03:59.596 "memory_domains": [ 00:03:59.596 { 00:03:59.596 "dma_device_id": "system", 00:03:59.596 "dma_device_type": 1 00:03:59.596 }, 00:03:59.596 { 00:03:59.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.596 "dma_device_type": 2 00:03:59.596 } 00:03:59.596 ], 00:03:59.596 "driver_specific": { 00:03:59.597 "passthru": { 00:03:59.597 "name": "Passthru0", 00:03:59.597 "base_bdev_name": "Malloc2" 00:03:59.597 } 00:03:59.597 } 00:03:59.597 } 00:03:59.597 ]' 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.597 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.855 13:13:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.855 00:03:59.855 real 0m0.327s 00:03:59.855 user 0m0.228s 00:03:59.855 sys 0m0.033s 00:03:59.855 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.855 13:13:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.855 ************************************ 00:03:59.855 END TEST rpc_daemon_integrity 00:03:59.855 ************************************ 00:03:59.855 13:13:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:59.855 13:13:48 rpc -- rpc/rpc.sh@84 -- # killprocess 56796 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 56796 ']' 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@958 -- # kill -0 56796 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@959 -- # uname 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56796 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:03:59.855 killing process with pid 56796 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56796' 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@973 -- # kill 56796 00:03:59.855 13:13:48 rpc -- common/autotest_common.sh@978 -- # wait 56796 00:04:00.426 00:04:00.426 real 0m2.650s 00:04:00.426 user 0m3.223s 00:04:00.426 sys 0m0.770s 00:04:00.426 13:13:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.426 13:13:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 ************************************ 00:04:00.426 END TEST rpc 00:04:00.426 ************************************ 00:04:00.426 13:13:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:00.426 13:13:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.426 13:13:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.426 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 ************************************ 00:04:00.426 START TEST skip_rpc 00:04:00.426 ************************************ 00:04:00.426 13:13:49 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:00.426 * Looking for test storage... 00:04:00.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.426 13:13:49 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.426 13:13:49 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.426 13:13:49 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.684 13:13:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.684 --rc genhtml_branch_coverage=1 00:04:00.684 --rc genhtml_function_coverage=1 00:04:00.684 --rc genhtml_legend=1 00:04:00.684 --rc geninfo_all_blocks=1 00:04:00.684 --rc geninfo_unexecuted_blocks=1 00:04:00.684 00:04:00.684 ' 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.684 --rc genhtml_branch_coverage=1 00:04:00.684 --rc genhtml_function_coverage=1 00:04:00.684 --rc genhtml_legend=1 00:04:00.684 --rc geninfo_all_blocks=1 00:04:00.684 --rc geninfo_unexecuted_blocks=1 00:04:00.684 00:04:00.684 ' 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.684 --rc genhtml_branch_coverage=1 00:04:00.684 --rc genhtml_function_coverage=1 00:04:00.684 --rc genhtml_legend=1 00:04:00.684 --rc geninfo_all_blocks=1 00:04:00.684 --rc geninfo_unexecuted_blocks=1 00:04:00.684 00:04:00.684 ' 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.684 --rc genhtml_branch_coverage=1 00:04:00.684 --rc genhtml_function_coverage=1 00:04:00.684 --rc genhtml_legend=1 00:04:00.684 --rc geninfo_all_blocks=1 00:04:00.684 --rc geninfo_unexecuted_blocks=1 00:04:00.684 00:04:00.684 ' 00:04:00.684 13:13:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:00.684 13:13:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:00.684 13:13:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.684 13:13:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.684 ************************************ 00:04:00.684 START TEST skip_rpc 00:04:00.684 ************************************ 00:04:00.684 13:13:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:00.684 13:13:49 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57000 00:04:00.684 13:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.684 13:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:00.684 13:13:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:00.684 [2024-11-17 13:13:49.776734] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:00.684 [2024-11-17 13:13:49.776868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57000 ] 00:04:00.942 [2024-11-17 13:13:49.926749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.942 [2024-11-17 13:13:49.987591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.942 [2024-11-17 13:13:50.067775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57000 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57000 ']' 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57000 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57000 00:04:06.218 killing process with pid 57000 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57000' 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57000 00:04:06.218 13:13:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57000 00:04:06.218 00:04:06.218 real 0m5.413s 00:04:06.218 user 0m5.021s 00:04:06.218 sys 0m0.315s 00:04:06.218 13:13:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.218 ************************************ 00:04:06.218 END TEST skip_rpc 00:04:06.218 ************************************ 00:04:06.218 13:13:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 13:13:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:06.218 13:13:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.218 13:13:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.218 13:13:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 ************************************ 00:04:06.218 START TEST skip_rpc_with_json 00:04:06.218 ************************************ 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57081 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57081 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57081 ']' 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.218 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 [2024-11-17 13:13:55.219614] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:06.218 [2024-11-17 13:13:55.219882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57081 ] 00:04:06.218 [2024-11-17 13:13:55.349839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.218 [2024-11-17 13:13:55.395457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.485 [2024-11-17 13:13:55.460549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 [2024-11-17 13:13:55.641676] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:06.485 request: 00:04:06.485 { 00:04:06.485 "trtype": "tcp", 00:04:06.485 "method": "nvmf_get_transports", 00:04:06.485 "req_id": 1 00:04:06.485 } 00:04:06.485 Got JSON-RPC error response 00:04:06.485 response: 00:04:06.485 { 00:04:06.485 "code": -19, 00:04:06.485 "message": "No such device" 00:04:06.485 } 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.485 [2024-11-17 13:13:55.649803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.485 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.744 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.744 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.744 { 00:04:06.744 "subsystems": [ 00:04:06.744 { 00:04:06.744 "subsystem": "fsdev", 00:04:06.744 "config": [ 00:04:06.744 { 00:04:06.744 "method": "fsdev_set_opts", 00:04:06.744 "params": { 00:04:06.744 "fsdev_io_pool_size": 65535, 00:04:06.744 "fsdev_io_cache_size": 256 00:04:06.744 } 00:04:06.744 } 00:04:06.744 ] 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "subsystem": "keyring", 00:04:06.744 "config": [] 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "subsystem": "iobuf", 00:04:06.744 "config": [ 00:04:06.744 { 00:04:06.744 "method": "iobuf_set_options", 00:04:06.744 "params": { 00:04:06.744 "small_pool_count": 8192, 00:04:06.744 "large_pool_count": 1024, 00:04:06.744 "small_bufsize": 8192, 00:04:06.744 "large_bufsize": 135168, 00:04:06.744 "enable_numa": false 00:04:06.744 } 
00:04:06.744 } 00:04:06.744 ] 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "subsystem": "sock", 00:04:06.744 "config": [ 00:04:06.744 { 00:04:06.744 "method": "sock_set_default_impl", 00:04:06.744 "params": { 00:04:06.744 "impl_name": "uring" 00:04:06.744 } 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "method": "sock_impl_set_options", 00:04:06.744 "params": { 00:04:06.744 "impl_name": "ssl", 00:04:06.744 "recv_buf_size": 4096, 00:04:06.744 "send_buf_size": 4096, 00:04:06.744 "enable_recv_pipe": true, 00:04:06.744 "enable_quickack": false, 00:04:06.744 "enable_placement_id": 0, 00:04:06.744 "enable_zerocopy_send_server": true, 00:04:06.744 "enable_zerocopy_send_client": false, 00:04:06.744 "zerocopy_threshold": 0, 00:04:06.744 "tls_version": 0, 00:04:06.744 "enable_ktls": false 00:04:06.744 } 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "method": "sock_impl_set_options", 00:04:06.744 "params": { 00:04:06.744 "impl_name": "posix", 00:04:06.744 "recv_buf_size": 2097152, 00:04:06.744 "send_buf_size": 2097152, 00:04:06.744 "enable_recv_pipe": true, 00:04:06.744 "enable_quickack": false, 00:04:06.744 "enable_placement_id": 0, 00:04:06.744 "enable_zerocopy_send_server": true, 00:04:06.744 "enable_zerocopy_send_client": false, 00:04:06.744 "zerocopy_threshold": 0, 00:04:06.744 "tls_version": 0, 00:04:06.744 "enable_ktls": false 00:04:06.744 } 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "method": "sock_impl_set_options", 00:04:06.744 "params": { 00:04:06.744 "impl_name": "uring", 00:04:06.744 "recv_buf_size": 2097152, 00:04:06.744 "send_buf_size": 2097152, 00:04:06.744 "enable_recv_pipe": true, 00:04:06.744 "enable_quickack": false, 00:04:06.744 "enable_placement_id": 0, 00:04:06.744 "enable_zerocopy_send_server": false, 00:04:06.744 "enable_zerocopy_send_client": false, 00:04:06.744 "zerocopy_threshold": 0, 00:04:06.744 "tls_version": 0, 00:04:06.744 "enable_ktls": false 00:04:06.744 } 00:04:06.744 } 00:04:06.744 ] 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "subsystem": "vmd", 00:04:06.744 "config": [] 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "subsystem": "accel", 00:04:06.744 "config": [ 00:04:06.744 { 00:04:06.744 "method": "accel_set_options", 00:04:06.744 "params": { 00:04:06.744 "small_cache_size": 128, 00:04:06.744 "large_cache_size": 16, 00:04:06.744 "task_count": 2048, 00:04:06.744 "sequence_count": 2048, 00:04:06.744 "buf_count": 2048 00:04:06.744 } 00:04:06.744 } 00:04:06.744 ] 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "subsystem": "bdev", 00:04:06.744 "config": [ 00:04:06.744 { 00:04:06.744 "method": "bdev_set_options", 00:04:06.744 "params": { 00:04:06.744 "bdev_io_pool_size": 65535, 00:04:06.744 "bdev_io_cache_size": 256, 00:04:06.744 "bdev_auto_examine": true, 00:04:06.744 "iobuf_small_cache_size": 128, 00:04:06.744 "iobuf_large_cache_size": 16 00:04:06.744 } 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "method": "bdev_raid_set_options", 00:04:06.744 "params": { 00:04:06.744 "process_window_size_kb": 1024, 00:04:06.744 "process_max_bandwidth_mb_sec": 0 00:04:06.744 } 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "method": "bdev_iscsi_set_options", 00:04:06.744 "params": { 00:04:06.744 "timeout_sec": 30 00:04:06.744 } 00:04:06.744 }, 00:04:06.744 { 00:04:06.744 "method": "bdev_nvme_set_options", 00:04:06.744 "params": { 00:04:06.744 "action_on_timeout": "none", 00:04:06.744 "timeout_us": 0, 00:04:06.744 "timeout_admin_us": 0, 00:04:06.744 "keep_alive_timeout_ms": 10000, 00:04:06.744 "arbitration_burst": 0, 00:04:06.744 "low_priority_weight": 0, 00:04:06.744 "medium_priority_weight": 
0, 00:04:06.744 "high_priority_weight": 0, 00:04:06.744 "nvme_adminq_poll_period_us": 10000, 00:04:06.744 "nvme_ioq_poll_period_us": 0, 00:04:06.744 "io_queue_requests": 0, 00:04:06.744 "delay_cmd_submit": true, 00:04:06.744 "transport_retry_count": 4, 00:04:06.744 "bdev_retry_count": 3, 00:04:06.745 "transport_ack_timeout": 0, 00:04:06.745 "ctrlr_loss_timeout_sec": 0, 00:04:06.745 "reconnect_delay_sec": 0, 00:04:06.745 "fast_io_fail_timeout_sec": 0, 00:04:06.745 "disable_auto_failback": false, 00:04:06.745 "generate_uuids": false, 00:04:06.745 "transport_tos": 0, 00:04:06.745 "nvme_error_stat": false, 00:04:06.745 "rdma_srq_size": 0, 00:04:06.745 "io_path_stat": false, 00:04:06.745 "allow_accel_sequence": false, 00:04:06.745 "rdma_max_cq_size": 0, 00:04:06.745 "rdma_cm_event_timeout_ms": 0, 00:04:06.745 "dhchap_digests": [ 00:04:06.745 "sha256", 00:04:06.745 "sha384", 00:04:06.745 "sha512" 00:04:06.745 ], 00:04:06.745 "dhchap_dhgroups": [ 00:04:06.745 "null", 00:04:06.745 "ffdhe2048", 00:04:06.745 "ffdhe3072", 00:04:06.745 "ffdhe4096", 00:04:06.745 "ffdhe6144", 00:04:06.745 "ffdhe8192" 00:04:06.745 ] 00:04:06.745 } 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "method": "bdev_nvme_set_hotplug", 00:04:06.745 "params": { 00:04:06.745 "period_us": 100000, 00:04:06.745 "enable": false 00:04:06.745 } 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "method": "bdev_wait_for_examine" 00:04:06.745 } 00:04:06.745 ] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "scsi", 00:04:06.745 "config": null 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "scheduler", 00:04:06.745 "config": [ 00:04:06.745 { 00:04:06.745 "method": "framework_set_scheduler", 00:04:06.745 "params": { 00:04:06.745 "name": "static" 00:04:06.745 } 00:04:06.745 } 00:04:06.745 ] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "vhost_scsi", 00:04:06.745 "config": [] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "vhost_blk", 00:04:06.745 "config": [] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "ublk", 00:04:06.745 "config": [] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "nbd", 00:04:06.745 "config": [] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "nvmf", 00:04:06.745 "config": [ 00:04:06.745 { 00:04:06.745 "method": "nvmf_set_config", 00:04:06.745 "params": { 00:04:06.745 "discovery_filter": "match_any", 00:04:06.745 "admin_cmd_passthru": { 00:04:06.745 "identify_ctrlr": false 00:04:06.745 }, 00:04:06.745 "dhchap_digests": [ 00:04:06.745 "sha256", 00:04:06.745 "sha384", 00:04:06.745 "sha512" 00:04:06.745 ], 00:04:06.745 "dhchap_dhgroups": [ 00:04:06.745 "null", 00:04:06.745 "ffdhe2048", 00:04:06.745 "ffdhe3072", 00:04:06.745 "ffdhe4096", 00:04:06.745 "ffdhe6144", 00:04:06.745 "ffdhe8192" 00:04:06.745 ] 00:04:06.745 } 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "method": "nvmf_set_max_subsystems", 00:04:06.745 "params": { 00:04:06.745 "max_subsystems": 1024 00:04:06.745 } 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "method": "nvmf_set_crdt", 00:04:06.745 "params": { 00:04:06.745 "crdt1": 0, 00:04:06.745 "crdt2": 0, 00:04:06.745 "crdt3": 0 00:04:06.745 } 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "method": "nvmf_create_transport", 00:04:06.745 "params": { 00:04:06.745 "trtype": "TCP", 00:04:06.745 "max_queue_depth": 128, 00:04:06.745 "max_io_qpairs_per_ctrlr": 127, 00:04:06.745 "in_capsule_data_size": 4096, 00:04:06.745 "max_io_size": 131072, 00:04:06.745 "io_unit_size": 131072, 00:04:06.745 "max_aq_depth": 128, 00:04:06.745 "num_shared_buffers": 511, 00:04:06.745 
"buf_cache_size": 4294967295, 00:04:06.745 "dif_insert_or_strip": false, 00:04:06.745 "zcopy": false, 00:04:06.745 "c2h_success": true, 00:04:06.745 "sock_priority": 0, 00:04:06.745 "abort_timeout_sec": 1, 00:04:06.745 "ack_timeout": 0, 00:04:06.745 "data_wr_pool_size": 0 00:04:06.745 } 00:04:06.745 } 00:04:06.745 ] 00:04:06.745 }, 00:04:06.745 { 00:04:06.745 "subsystem": "iscsi", 00:04:06.745 "config": [ 00:04:06.745 { 00:04:06.745 "method": "iscsi_set_options", 00:04:06.745 "params": { 00:04:06.745 "node_base": "iqn.2016-06.io.spdk", 00:04:06.745 "max_sessions": 128, 00:04:06.745 "max_connections_per_session": 2, 00:04:06.745 "max_queue_depth": 64, 00:04:06.745 "default_time2wait": 2, 00:04:06.745 "default_time2retain": 20, 00:04:06.745 "first_burst_length": 8192, 00:04:06.745 "immediate_data": true, 00:04:06.745 "allow_duplicated_isid": false, 00:04:06.745 "error_recovery_level": 0, 00:04:06.745 "nop_timeout": 60, 00:04:06.745 "nop_in_interval": 30, 00:04:06.745 "disable_chap": false, 00:04:06.745 "require_chap": false, 00:04:06.745 "mutual_chap": false, 00:04:06.745 "chap_group": 0, 00:04:06.745 "max_large_datain_per_connection": 64, 00:04:06.745 "max_r2t_per_connection": 4, 00:04:06.745 "pdu_pool_size": 36864, 00:04:06.745 "immediate_data_pool_size": 16384, 00:04:06.745 "data_out_pool_size": 2048 00:04:06.745 } 00:04:06.745 } 00:04:06.745 ] 00:04:06.745 } 00:04:06.745 ] 00:04:06.745 } 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57081 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57081 ']' 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57081 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57081 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.745 killing process with pid 57081 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57081' 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57081 00:04:06.745 13:13:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57081 00:04:07.312 13:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57101 00:04:07.312 13:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:07.312 13:13:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57101 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57101 ']' 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57101 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:12.578 13:14:01 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57101 00:04:12.578 killing process with pid 57101 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57101' 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57101 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57101 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.578 00:04:12.578 real 0m6.473s 00:04:12.578 user 0m6.010s 00:04:12.578 sys 0m0.605s 00:04:12.578 ************************************ 00:04:12.578 END TEST skip_rpc_with_json 00:04:12.578 ************************************ 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.578 13:14:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:12.578 13:14:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.578 13:14:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.578 13:14:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.578 ************************************ 00:04:12.578 START TEST skip_rpc_with_delay 00:04:12.578 ************************************ 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.578 13:14:01 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.578 [2024-11-17 13:14:01.765160] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:12.578 ************************************ 00:04:12.578 END TEST skip_rpc_with_delay 00:04:12.578 ************************************ 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:12.578 00:04:12.578 real 0m0.092s 00:04:12.578 user 0m0.059s 00:04:12.578 sys 0m0.033s 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.578 13:14:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 13:14:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.837 13:14:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:12.837 13:14:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:12.837 13:14:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.837 13:14:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.837 13:14:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 ************************************ 00:04:12.837 START TEST exit_on_failed_rpc_init 00:04:12.837 ************************************ 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57211 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57211 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57211 ']' 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.837 13:14:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 [2024-11-17 13:14:01.911107] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
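The skip_rpc_with_delay case above comes down to a single negative check: spdk_tgt must refuse to start when it is told both to skip the RPC server and to wait for RPC, and the NOT wrapper asserts that the command fails. A minimal sketch of the same check, using the binary path from this run (the if/then framing stands in for the NOT helper):

    # spdk_tgt must reject the conflicting flags instead of hanging.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started with --no-rpc-server and --wait-for-rpc" >&2
        exit 1
    fi
    # Expected error: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
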
00:04:12.837 [2024-11-17 13:14:01.911214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57211 ] 00:04:12.837 [2024-11-17 13:14:02.055074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.096 [2024-11-17 13:14:02.095133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.096 [2024-11-17 13:14:02.158571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:13.354 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.354 [2024-11-17 13:14:02.403863] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:13.354 [2024-11-17 13:14:02.403956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57221 ] 00:04:13.354 [2024-11-17 13:14:02.544969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.613 [2024-11-17 13:14:02.588913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.613 [2024-11-17 13:14:02.589261] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:13.613 [2024-11-17 13:14:02.589463] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:13.613 [2024-11-17 13:14:02.589556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57211 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57211 ']' 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57211 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57211 00:04:13.613 killing process with pid 57211 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57211' 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57211 00:04:13.613 13:14:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57211 00:04:13.872 ************************************ 00:04:13.872 END TEST exit_on_failed_rpc_init 00:04:13.872 ************************************ 00:04:13.872 00:04:13.872 real 0m1.220s 00:04:13.872 user 0m1.248s 00:04:13.872 sys 0m0.368s 00:04:13.872 13:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.872 13:14:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.130 13:14:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.130 ************************************ 00:04:14.130 END TEST skip_rpc 00:04:14.130 ************************************ 00:04:14.130 00:04:14.130 real 0m13.606s 00:04:14.130 user 0m12.528s 00:04:14.130 sys 0m1.533s 00:04:14.130 13:14:03 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.130 13:14:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.130 13:14:03 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:14.130 13:14:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.130 13:14:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.130 13:14:03 -- common/autotest_common.sh@10 -- # set +x 00:04:14.130 
************************************ 00:04:14.130 START TEST rpc_client 00:04:14.130 ************************************ 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:14.130 * Looking for test storage... 00:04:14.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.130 13:14:03 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.130 13:14:03 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.131 --rc genhtml_branch_coverage=1 00:04:14.131 --rc genhtml_function_coverage=1 00:04:14.131 --rc genhtml_legend=1 00:04:14.131 --rc geninfo_all_blocks=1 00:04:14.131 --rc geninfo_unexecuted_blocks=1 00:04:14.131 00:04:14.131 ' 00:04:14.131 13:14:03 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.131 --rc genhtml_branch_coverage=1 00:04:14.131 --rc genhtml_function_coverage=1 00:04:14.131 --rc genhtml_legend=1 00:04:14.131 --rc geninfo_all_blocks=1 00:04:14.131 --rc geninfo_unexecuted_blocks=1 00:04:14.131 00:04:14.131 ' 00:04:14.131 13:14:03 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.131 --rc genhtml_branch_coverage=1 00:04:14.131 --rc genhtml_function_coverage=1 00:04:14.131 --rc genhtml_legend=1 00:04:14.131 --rc geninfo_all_blocks=1 00:04:14.131 --rc geninfo_unexecuted_blocks=1 00:04:14.131 00:04:14.131 ' 00:04:14.131 13:14:03 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.131 --rc genhtml_branch_coverage=1 00:04:14.131 --rc genhtml_function_coverage=1 00:04:14.131 --rc genhtml_legend=1 00:04:14.131 --rc geninfo_all_blocks=1 00:04:14.131 --rc geninfo_unexecuted_blocks=1 00:04:14.131 00:04:14.131 ' 00:04:14.131 13:14:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:14.389 OK 00:04:14.389 13:14:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:14.389 00:04:14.389 real 0m0.214s 00:04:14.389 user 0m0.142s 00:04:14.389 sys 0m0.080s 00:04:14.389 13:14:03 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.389 13:14:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:14.389 ************************************ 00:04:14.389 END TEST rpc_client 00:04:14.389 ************************************ 00:04:14.389 13:14:03 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:14.389 13:14:03 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.389 13:14:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.389 13:14:03 -- common/autotest_common.sh@10 -- # set +x 00:04:14.389 ************************************ 00:04:14.389 START TEST json_config 00:04:14.389 ************************************ 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.389 13:14:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.389 13:14:03 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.389 13:14:03 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.389 13:14:03 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.389 13:14:03 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.389 13:14:03 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:14.389 13:14:03 json_config -- scripts/common.sh@345 -- # : 1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.389 13:14:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.389 13:14:03 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@353 -- # local d=1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.389 13:14:03 json_config -- scripts/common.sh@355 -- # echo 1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.389 13:14:03 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@353 -- # local d=2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.389 13:14:03 json_config -- scripts/common.sh@355 -- # echo 2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.389 13:14:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.389 13:14:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.389 13:14:03 json_config -- scripts/common.sh@368 -- # return 0 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.389 --rc genhtml_branch_coverage=1 00:04:14.389 --rc genhtml_function_coverage=1 00:04:14.389 --rc genhtml_legend=1 00:04:14.389 --rc geninfo_all_blocks=1 00:04:14.389 --rc geninfo_unexecuted_blocks=1 00:04:14.389 00:04:14.389 ' 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.389 --rc genhtml_branch_coverage=1 00:04:14.389 --rc genhtml_function_coverage=1 00:04:14.389 --rc genhtml_legend=1 00:04:14.389 --rc geninfo_all_blocks=1 00:04:14.389 --rc geninfo_unexecuted_blocks=1 00:04:14.389 00:04:14.389 ' 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.389 --rc genhtml_branch_coverage=1 00:04:14.389 --rc genhtml_function_coverage=1 00:04:14.389 --rc genhtml_legend=1 00:04:14.389 --rc geninfo_all_blocks=1 00:04:14.389 --rc geninfo_unexecuted_blocks=1 00:04:14.389 00:04:14.389 ' 00:04:14.389 13:14:03 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.390 --rc genhtml_branch_coverage=1 00:04:14.390 --rc genhtml_function_coverage=1 00:04:14.390 --rc genhtml_legend=1 00:04:14.390 --rc geninfo_all_blocks=1 00:04:14.390 --rc geninfo_unexecuted_blocks=1 00:04:14.390 00:04:14.390 ' 00:04:14.390 13:14:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.390 13:14:03 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.390 13:14:03 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:14.390 13:14:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.390 13:14:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.648 13:14:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.648 13:14:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.648 13:14:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.648 13:14:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.648 13:14:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.648 13:14:03 json_config -- paths/export.sh@5 -- # export PATH 00:04:14.648 13:14:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.648 13:14:03 json_config -- nvmf/common.sh@51 -- # : 0 00:04:14.648 13:14:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.648 13:14:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.648 13:14:03 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.649 13:14:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.649 13:14:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.649 13:14:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.649 13:14:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.649 13:14:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.649 13:14:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:14.649 INFO: JSON configuration test init 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.649 Waiting for target to run... 00:04:14.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
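The target launch traced below follows the json_config_test_start_app pattern: start spdk_tgt with the parameters from app_params, then block until the RPC socket answers. A rough equivalent, with the polling loop standing in for the waitforlisten helper (rpc_get_methods is used here only as a cheap liveness probe, not as the helper's actual implementation):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    # Poll the RPC socket until the target is ready to accept configuration RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "target pid $tgt_pid is listening on /var/tmp/spdk_tgt.sock"
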
00:04:14.649 13:14:03 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:14.649 13:14:03 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.649 13:14:03 json_config -- json_config/common.sh@10 -- # shift 00:04:14.649 13:14:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.649 13:14:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.649 13:14:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.649 13:14:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.649 13:14:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.649 13:14:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57361 00:04:14.649 13:14:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.649 13:14:03 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:14.649 13:14:03 json_config -- json_config/common.sh@25 -- # waitforlisten 57361 /var/tmp/spdk_tgt.sock 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@835 -- # '[' -z 57361 ']' 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.649 13:14:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.649 [2024-11-17 13:14:03.701480] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:14.649 [2024-11-17 13:14:03.701769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57361 ] 00:04:15.216 [2024-11-17 13:14:04.147797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.216 [2024-11-17 13:14:04.195972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.782 13:14:04 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.782 13:14:04 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:15.782 13:14:04 json_config -- json_config/common.sh@26 -- # echo '' 00:04:15.782 00:04:15.782 13:14:04 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:15.782 13:14:04 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:15.782 13:14:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.782 13:14:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.782 13:14:04 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:15.782 13:14:04 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:15.782 13:14:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.782 13:14:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.782 13:14:04 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:15.782 13:14:04 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:15.782 13:14:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:16.040 [2024-11-17 13:14:05.121686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:16.298 13:14:05 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:16.299 13:14:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.299 13:14:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:16.299 13:14:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:16.299 13:14:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@54 -- # sort 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:16.557 13:14:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.557 13:14:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:16.557 13:14:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.557 13:14:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:16.557 13:14:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:16.557 13:14:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:16.816 MallocForNvmf0 00:04:16.816 13:14:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.816 13:14:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.075 MallocForNvmf1 00:04:17.075 13:14:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.075 13:14:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.333 [2024-11-17 13:14:06.506494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.333 13:14:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.333 13:14:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.591 13:14:06 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.591 13:14:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.850 13:14:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.850 13:14:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.109 13:14:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.109 13:14:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.368 [2024-11-17 13:14:07.462984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.368 13:14:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:18.368 13:14:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.368 13:14:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.368 13:14:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:18.368 13:14:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.368 13:14:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.368 13:14:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:18.368 13:14:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.368 13:14:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.628 MallocBdevForConfigChangeCheck 00:04:18.886 13:14:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:18.887 13:14:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.887 13:14:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.887 13:14:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:18.887 13:14:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.145 INFO: shutting down applications... 00:04:19.145 13:14:08 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
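Taken together, the tgt_rpc calls above build the small NVMe-oF/TCP target this test drives: two malloc bdevs, a TCP transport, one subsystem holding both namespaces, and a listener on 127.0.0.1:4420. Collected into a plain rpc.py sequence, with every method and argument as captured in the trace (only the rpc shell function is added for brevity):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    # Backing malloc bdevs (size in MB, block size in bytes).
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, subsystem, namespaces, and the 127.0.0.1:4420 listener.
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
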
00:04:19.145 13:14:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:19.145 13:14:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:19.145 13:14:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:19.145 13:14:08 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.742 Calling clear_iscsi_subsystem 00:04:19.742 Calling clear_nvmf_subsystem 00:04:19.742 Calling clear_nbd_subsystem 00:04:19.742 Calling clear_ublk_subsystem 00:04:19.742 Calling clear_vhost_blk_subsystem 00:04:19.742 Calling clear_vhost_scsi_subsystem 00:04:19.742 Calling clear_bdev_subsystem 00:04:19.742 13:14:08 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:19.742 13:14:08 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:19.742 13:14:08 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:19.742 13:14:08 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.742 13:14:08 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.742 13:14:08 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:20.001 13:14:09 json_config -- json_config/json_config.sh@352 -- # break 00:04:20.001 13:14:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:20.001 13:14:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:20.001 13:14:09 json_config -- json_config/common.sh@31 -- # local app=target 00:04:20.001 13:14:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.001 13:14:09 json_config -- json_config/common.sh@35 -- # [[ -n 57361 ]] 00:04:20.001 13:14:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57361 00:04:20.001 13:14:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.001 13:14:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.001 13:14:09 json_config -- json_config/common.sh@41 -- # kill -0 57361 00:04:20.001 13:14:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.567 13:14:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.567 13:14:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.567 SPDK target shutdown done 00:04:20.567 INFO: relaunching applications... 00:04:20.567 13:14:09 json_config -- json_config/common.sh@41 -- # kill -0 57361 00:04:20.567 13:14:09 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.567 13:14:09 json_config -- json_config/common.sh@43 -- # break 00:04:20.567 13:14:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.567 13:14:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.567 13:14:09 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
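The restart above is the standard round-trip: snapshot the running configuration with save_config, stop the target, and relaunch it from the snapshot so the later comparison has something to diff against. A condensed sketch, assuming the target PID is still in $tgt_pid and using the config path from this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Snapshot the live configuration to the file the relaunch will boot from.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > "$SPDK/spdk_tgt_config.json"
    kill -SIGINT "$tgt_pid"; wait "$tgt_pid"
    # Relaunch with the same parameters, but load the saved JSON instead of waiting for RPC.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &
    tgt_pid=$!
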
00:04:20.567 13:14:09 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.567 13:14:09 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.567 13:14:09 json_config -- json_config/common.sh@10 -- # shift 00:04:20.567 13:14:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.567 13:14:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.567 13:14:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.567 13:14:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.567 13:14:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.567 13:14:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57556 00:04:20.567 13:14:09 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.567 13:14:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.567 Waiting for target to run... 00:04:20.567 13:14:09 json_config -- json_config/common.sh@25 -- # waitforlisten 57556 /var/tmp/spdk_tgt.sock 00:04:20.567 13:14:09 json_config -- common/autotest_common.sh@835 -- # '[' -z 57556 ']' 00:04:20.567 13:14:09 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.567 13:14:09 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.567 13:14:09 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.567 13:14:09 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.567 13:14:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.567 [2024-11-17 13:14:09.661203] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:20.567 [2024-11-17 13:14:09.661295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57556 ] 00:04:21.135 [2024-11-17 13:14:10.093383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.135 [2024-11-17 13:14:10.128002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.135 [2024-11-17 13:14:10.266644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:21.394 [2024-11-17 13:14:10.482871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.394 [2024-11-17 13:14:10.514956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.394 00:04:21.394 INFO: Checking if target configuration is the same... 
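The "same configuration" check traced below is a normalize-and-diff: dump the live config over RPC, pass both it and the on-disk copy through config_filter.py -method sort, and require an empty diff. The same comparison in outline (the /tmp file names here are placeholders for the mktemp files used by the real json_diff.sh run):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live_config.json
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/saved_config.json
    diff -u /tmp/live_config.json /tmp/saved_config.json \
        && echo 'INFO: JSON config files are the same'
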
00:04:21.394 13:14:10 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.394 13:14:10 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:21.394 13:14:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.394 13:14:10 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:21.394 13:14:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:21.394 13:14:10 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.394 13:14:10 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:21.394 13:14:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.394 + '[' 2 -ne 2 ']' 00:04:21.394 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.394 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:21.394 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.394 +++ basename /dev/fd/62 00:04:21.394 ++ mktemp /tmp/62.XXX 00:04:21.394 + tmp_file_1=/tmp/62.jhw 00:04:21.394 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.394 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.394 + tmp_file_2=/tmp/spdk_tgt_config.json.MQE 00:04:21.394 + ret=0 00:04:21.394 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.962 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.962 + diff -u /tmp/62.jhw /tmp/spdk_tgt_config.json.MQE 00:04:21.962 INFO: JSON config files are the same 00:04:21.962 + echo 'INFO: JSON config files are the same' 00:04:21.962 + rm /tmp/62.jhw /tmp/spdk_tgt_config.json.MQE 00:04:21.962 + exit 0 00:04:21.962 INFO: changing configuration and checking if this can be detected... 00:04:21.962 13:14:11 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:21.962 13:14:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:21.962 13:14:11 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.962 13:14:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.221 13:14:11 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.221 13:14:11 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:22.221 13:14:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.221 + '[' 2 -ne 2 ']' 00:04:22.221 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:22.221 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:22.221 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:22.221 +++ basename /dev/fd/62 00:04:22.221 ++ mktemp /tmp/62.XXX 00:04:22.221 + tmp_file_1=/tmp/62.idD 00:04:22.221 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.221 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.221 + tmp_file_2=/tmp/spdk_tgt_config.json.fYO 00:04:22.221 + ret=0 00:04:22.221 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.480 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.739 + diff -u /tmp/62.idD /tmp/spdk_tgt_config.json.fYO 00:04:22.739 + ret=1 00:04:22.739 + echo '=== Start of file: /tmp/62.idD ===' 00:04:22.739 + cat /tmp/62.idD 00:04:22.739 + echo '=== End of file: /tmp/62.idD ===' 00:04:22.739 + echo '' 00:04:22.739 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fYO ===' 00:04:22.739 + cat /tmp/spdk_tgt_config.json.fYO 00:04:22.739 + echo '=== End of file: /tmp/spdk_tgt_config.json.fYO ===' 00:04:22.739 + echo '' 00:04:22.739 + rm /tmp/62.idD /tmp/spdk_tgt_config.json.fYO 00:04:22.739 + exit 1 00:04:22.739 INFO: configuration change detected. 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:22.739 13:14:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.739 13:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@324 -- # [[ -n 57556 ]] 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:22.739 13:14:11 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.739 13:14:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.740 13:14:11 json_config -- json_config/json_config.sh@330 -- # killprocess 57556 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@954 -- # '[' -z 57556 ']' 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@958 -- # kill -0 57556 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@959 -- # uname 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57556 00:04:22.740 
killing process with pid 57556 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57556' 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@973 -- # kill 57556 00:04:22.740 13:14:11 json_config -- common/autotest_common.sh@978 -- # wait 57556 00:04:22.999 13:14:12 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.999 13:14:12 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:22.999 13:14:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.999 13:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.999 INFO: Success 00:04:22.999 13:14:12 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:22.999 13:14:12 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:22.999 00:04:22.999 real 0m8.667s 00:04:22.999 user 0m12.370s 00:04:22.999 sys 0m1.763s 00:04:22.999 ************************************ 00:04:22.999 END TEST json_config 00:04:22.999 ************************************ 00:04:22.999 13:14:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.999 13:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.999 13:14:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.999 13:14:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.999 13:14:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.999 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:22.999 ************************************ 00:04:22.999 START TEST json_config_extra_key 00:04:22.999 ************************************ 00:04:22.999 13:14:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.999 13:14:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.999 13:14:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.999 13:14:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.259 13:14:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.259 13:14:12 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.259 13:14:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.260 --rc genhtml_branch_coverage=1 00:04:23.260 --rc genhtml_function_coverage=1 00:04:23.260 --rc genhtml_legend=1 00:04:23.260 --rc geninfo_all_blocks=1 00:04:23.260 --rc geninfo_unexecuted_blocks=1 00:04:23.260 00:04:23.260 ' 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.260 --rc genhtml_branch_coverage=1 00:04:23.260 --rc genhtml_function_coverage=1 00:04:23.260 --rc genhtml_legend=1 00:04:23.260 --rc geninfo_all_blocks=1 00:04:23.260 --rc geninfo_unexecuted_blocks=1 00:04:23.260 00:04:23.260 ' 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.260 --rc genhtml_branch_coverage=1 00:04:23.260 --rc genhtml_function_coverage=1 00:04:23.260 --rc genhtml_legend=1 00:04:23.260 --rc geninfo_all_blocks=1 00:04:23.260 --rc geninfo_unexecuted_blocks=1 00:04:23.260 00:04:23.260 ' 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.260 --rc genhtml_branch_coverage=1 00:04:23.260 --rc genhtml_function_coverage=1 00:04:23.260 --rc genhtml_legend=1 00:04:23.260 --rc geninfo_all_blocks=1 00:04:23.260 --rc geninfo_unexecuted_blocks=1 00:04:23.260 00:04:23.260 ' 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.260 13:14:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.260 13:14:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.260 13:14:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.260 13:14:12 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.260 13:14:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:23.260 13:14:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.260 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.260 13:14:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.260 INFO: launching applications... 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
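The target launch traced in the next lines starts spdk_tgt with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json and then waits for the RPC socket to come up. A minimal standalone sketch of that launch-and-wait pattern, assuming the paths and flags printed in this log and using a simple 100x0.1s retry loop in place of waitforlisten:

    # Sketch only: launch spdk_tgt with a JSON config and wait for its RPC socket.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk_tgt.sock
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$RPC_SOCK" \
        --json "$SPDK_DIR/test/json_config/extra_key.json" &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the target is listening on the socket.
        if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
            echo "target $tgt_pid is up"
            break
        fi
        sleep 0.1
    done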
00:04:23.260 13:14:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57709 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.260 Waiting for target to run... 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57709 /var/tmp/spdk_tgt.sock 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57709 ']' 00:04:23.260 13:14:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.260 13:14:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.260 [2024-11-17 13:14:12.402969] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:23.260 [2024-11-17 13:14:12.403473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57709 ] 00:04:23.829 [2024-11-17 13:14:12.847376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.829 [2024-11-17 13:14:12.884903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.829 [2024-11-17 13:14:12.915048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:24.397 13:14:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.397 13:14:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:24.397 00:04:24.397 INFO: shutting down applications... 00:04:24.397 13:14:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
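The shutdown traced below sends SIGINT to the target and then polls it with kill -0 for up to 30 half-second intervals before declaring it gone. Extracted from the harness, the same pattern looks like this; the pid value is the one reported earlier in this log and is illustrative only:

    # Sketch of json_config_test_shutdown_app's SIGINT-then-poll teardown.
    app_pid=57709              # pid printed earlier in this log; illustrative only
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done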
00:04:24.397 13:14:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57709 ]] 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57709 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57709 00:04:24.397 13:14:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57709 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.965 SPDK target shutdown done 00:04:24.965 13:14:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.965 Success 00:04:24.965 13:14:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.965 00:04:24.965 real 0m1.767s 00:04:24.965 user 0m1.644s 00:04:24.965 sys 0m0.453s 00:04:24.965 13:14:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.965 ************************************ 00:04:24.965 END TEST json_config_extra_key 00:04:24.965 ************************************ 00:04:24.965 13:14:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.965 13:14:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.965 13:14:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.965 13:14:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.965 13:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:24.965 ************************************ 00:04:24.965 START TEST alias_rpc 00:04:24.965 ************************************ 00:04:24.965 13:14:13 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.965 * Looking for test storage... 
00:04:24.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:24.965 13:14:14 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.965 13:14:14 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.965 13:14:14 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.965 13:14:14 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.965 13:14:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.966 13:14:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.966 --rc genhtml_branch_coverage=1 00:04:24.966 --rc genhtml_function_coverage=1 00:04:24.966 --rc genhtml_legend=1 00:04:24.966 --rc geninfo_all_blocks=1 00:04:24.966 --rc geninfo_unexecuted_blocks=1 00:04:24.966 00:04:24.966 ' 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.966 --rc genhtml_branch_coverage=1 00:04:24.966 --rc genhtml_function_coverage=1 00:04:24.966 --rc genhtml_legend=1 00:04:24.966 --rc geninfo_all_blocks=1 00:04:24.966 --rc geninfo_unexecuted_blocks=1 00:04:24.966 00:04:24.966 ' 00:04:24.966 13:14:14 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.966 --rc genhtml_branch_coverage=1 00:04:24.966 --rc genhtml_function_coverage=1 00:04:24.966 --rc genhtml_legend=1 00:04:24.966 --rc geninfo_all_blocks=1 00:04:24.966 --rc geninfo_unexecuted_blocks=1 00:04:24.966 00:04:24.966 ' 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.966 --rc genhtml_branch_coverage=1 00:04:24.966 --rc genhtml_function_coverage=1 00:04:24.966 --rc genhtml_legend=1 00:04:24.966 --rc geninfo_all_blocks=1 00:04:24.966 --rc geninfo_unexecuted_blocks=1 00:04:24.966 00:04:24.966 ' 00:04:24.966 13:14:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.966 13:14:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57783 00:04:24.966 13:14:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57783 00:04:24.966 13:14:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57783 ']' 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.966 13:14:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.225 [2024-11-17 13:14:14.208746] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
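The pid teardowns in this log (57556 above, 57783 and 57859 below) all follow the killprocess helper from common/autotest_common.sh: confirm the pid is still alive, check with ps that it is an SPDK reactor rather than a sudo wrapper, then kill and wait. A simplified sketch of that pattern, omitting the sudo re-exec branch the real helper carries:

    # Simplified sketch of the killprocess teardown traced repeatedly in this log.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                 # process must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")    # spdk_tgt shows up as reactor_0
        [ "$name" = sudo ] && return 1             # the real helper handles sudo specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # works when the pid is a child of this shell
    }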
00:04:25.225 [2024-11-17 13:14:14.208848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:04:25.225 [2024-11-17 13:14:14.348714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.225 [2024-11-17 13:14:14.390104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.483 [2024-11-17 13:14:14.457433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:25.483 13:14:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.483 13:14:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:25.483 13:14:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:26.051 13:14:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57783 00:04:26.051 13:14:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57783 ']' 00:04:26.051 13:14:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57783 00:04:26.052 13:14:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.052 13:14:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.052 13:14:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57783 00:04:26.052 13:14:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.052 killing process with pid 57783 00:04:26.052 13:14:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.052 13:14:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57783' 00:04:26.052 13:14:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 57783 00:04:26.052 13:14:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 57783 00:04:26.310 00:04:26.310 real 0m1.402s 00:04:26.310 user 0m1.482s 00:04:26.310 sys 0m0.434s 00:04:26.310 13:14:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.310 ************************************ 00:04:26.310 END TEST alias_rpc 00:04:26.310 ************************************ 00:04:26.310 13:14:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.310 13:14:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:26.310 13:14:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:26.310 13:14:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.310 13:14:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.310 13:14:15 -- common/autotest_common.sh@10 -- # set +x 00:04:26.310 ************************************ 00:04:26.310 START TEST spdkcli_tcp 00:04:26.310 ************************************ 00:04:26.310 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:26.310 * Looking for test storage... 
00:04:26.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:26.310 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.310 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.310 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.569 13:14:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.569 --rc genhtml_branch_coverage=1 00:04:26.569 --rc genhtml_function_coverage=1 00:04:26.569 --rc genhtml_legend=1 00:04:26.569 --rc geninfo_all_blocks=1 00:04:26.569 --rc geninfo_unexecuted_blocks=1 00:04:26.569 00:04:26.569 ' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.569 --rc genhtml_branch_coverage=1 00:04:26.569 --rc genhtml_function_coverage=1 00:04:26.569 --rc genhtml_legend=1 00:04:26.569 --rc geninfo_all_blocks=1 00:04:26.569 --rc geninfo_unexecuted_blocks=1 00:04:26.569 
00:04:26.569 ' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.569 --rc genhtml_branch_coverage=1 00:04:26.569 --rc genhtml_function_coverage=1 00:04:26.569 --rc genhtml_legend=1 00:04:26.569 --rc geninfo_all_blocks=1 00:04:26.569 --rc geninfo_unexecuted_blocks=1 00:04:26.569 00:04:26.569 ' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.569 --rc genhtml_branch_coverage=1 00:04:26.569 --rc genhtml_function_coverage=1 00:04:26.569 --rc genhtml_legend=1 00:04:26.569 --rc geninfo_all_blocks=1 00:04:26.569 --rc geninfo_unexecuted_blocks=1 00:04:26.569 00:04:26.569 ' 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57859 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.569 13:14:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57859 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57859 ']' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.569 13:14:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.569 [2024-11-17 13:14:15.679144] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
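The spdkcli_tcp steps just below put socat in front of the target's UNIX-domain RPC socket so the same rpc.py calls can be driven over TCP: socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is invoked with -r 100 -t 2 so it retries while the bridge comes up. A minimal sketch of that bridge, using only the address, port, and flags shown in this log:

    # Sketch: expose an SPDK UNIX-domain RPC socket over TCP for testing.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r sets connection retries and -t the per-call timeout while socat starts up.
    "$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    # Without the fork option socat serves a single connection and then exits,
    # so this kill is only a cleanup guard.
    kill "$socat_pid" 2>/dev/null || true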
00:04:26.569 [2024-11-17 13:14:15.679252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57859 ] 00:04:26.828 [2024-11-17 13:14:15.821101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.828 [2024-11-17 13:14:15.874602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.828 [2024-11-17 13:14:15.874611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.828 [2024-11-17 13:14:15.940807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:27.764 13:14:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.764 13:14:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:27.764 13:14:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57876 00:04:27.764 13:14:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:27.764 13:14:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:27.764 [ 00:04:27.764 "bdev_malloc_delete", 00:04:27.764 "bdev_malloc_create", 00:04:27.764 "bdev_null_resize", 00:04:27.764 "bdev_null_delete", 00:04:27.764 "bdev_null_create", 00:04:27.764 "bdev_nvme_cuse_unregister", 00:04:27.764 "bdev_nvme_cuse_register", 00:04:27.764 "bdev_opal_new_user", 00:04:27.764 "bdev_opal_set_lock_state", 00:04:27.764 "bdev_opal_delete", 00:04:27.764 "bdev_opal_get_info", 00:04:27.764 "bdev_opal_create", 00:04:27.764 "bdev_nvme_opal_revert", 00:04:27.764 "bdev_nvme_opal_init", 00:04:27.764 "bdev_nvme_send_cmd", 00:04:27.764 "bdev_nvme_set_keys", 00:04:27.764 "bdev_nvme_get_path_iostat", 00:04:27.764 "bdev_nvme_get_mdns_discovery_info", 00:04:27.764 "bdev_nvme_stop_mdns_discovery", 00:04:27.764 "bdev_nvme_start_mdns_discovery", 00:04:27.764 "bdev_nvme_set_multipath_policy", 00:04:27.764 "bdev_nvme_set_preferred_path", 00:04:27.764 "bdev_nvme_get_io_paths", 00:04:27.764 "bdev_nvme_remove_error_injection", 00:04:27.764 "bdev_nvme_add_error_injection", 00:04:27.764 "bdev_nvme_get_discovery_info", 00:04:27.764 "bdev_nvme_stop_discovery", 00:04:27.764 "bdev_nvme_start_discovery", 00:04:27.764 "bdev_nvme_get_controller_health_info", 00:04:27.764 "bdev_nvme_disable_controller", 00:04:27.764 "bdev_nvme_enable_controller", 00:04:27.764 "bdev_nvme_reset_controller", 00:04:27.764 "bdev_nvme_get_transport_statistics", 00:04:27.764 "bdev_nvme_apply_firmware", 00:04:27.764 "bdev_nvme_detach_controller", 00:04:27.764 "bdev_nvme_get_controllers", 00:04:27.764 "bdev_nvme_attach_controller", 00:04:27.764 "bdev_nvme_set_hotplug", 00:04:27.764 "bdev_nvme_set_options", 00:04:27.764 "bdev_passthru_delete", 00:04:27.764 "bdev_passthru_create", 00:04:27.764 "bdev_lvol_set_parent_bdev", 00:04:27.764 "bdev_lvol_set_parent", 00:04:27.764 "bdev_lvol_check_shallow_copy", 00:04:27.764 "bdev_lvol_start_shallow_copy", 00:04:27.764 "bdev_lvol_grow_lvstore", 00:04:27.764 "bdev_lvol_get_lvols", 00:04:27.764 "bdev_lvol_get_lvstores", 00:04:27.764 "bdev_lvol_delete", 00:04:27.764 "bdev_lvol_set_read_only", 00:04:27.764 "bdev_lvol_resize", 00:04:27.764 "bdev_lvol_decouple_parent", 00:04:27.764 "bdev_lvol_inflate", 00:04:27.765 "bdev_lvol_rename", 00:04:27.765 "bdev_lvol_clone_bdev", 00:04:27.765 "bdev_lvol_clone", 00:04:27.765 "bdev_lvol_snapshot", 
00:04:27.765 "bdev_lvol_create", 00:04:27.765 "bdev_lvol_delete_lvstore", 00:04:27.765 "bdev_lvol_rename_lvstore", 00:04:27.765 "bdev_lvol_create_lvstore", 00:04:27.765 "bdev_raid_set_options", 00:04:27.765 "bdev_raid_remove_base_bdev", 00:04:27.765 "bdev_raid_add_base_bdev", 00:04:27.765 "bdev_raid_delete", 00:04:27.765 "bdev_raid_create", 00:04:27.765 "bdev_raid_get_bdevs", 00:04:27.765 "bdev_error_inject_error", 00:04:27.765 "bdev_error_delete", 00:04:27.765 "bdev_error_create", 00:04:27.765 "bdev_split_delete", 00:04:27.765 "bdev_split_create", 00:04:27.765 "bdev_delay_delete", 00:04:27.765 "bdev_delay_create", 00:04:27.765 "bdev_delay_update_latency", 00:04:27.765 "bdev_zone_block_delete", 00:04:27.765 "bdev_zone_block_create", 00:04:27.765 "blobfs_create", 00:04:27.765 "blobfs_detect", 00:04:27.765 "blobfs_set_cache_size", 00:04:27.765 "bdev_aio_delete", 00:04:27.765 "bdev_aio_rescan", 00:04:27.765 "bdev_aio_create", 00:04:27.765 "bdev_ftl_set_property", 00:04:27.765 "bdev_ftl_get_properties", 00:04:27.765 "bdev_ftl_get_stats", 00:04:27.765 "bdev_ftl_unmap", 00:04:27.765 "bdev_ftl_unload", 00:04:27.765 "bdev_ftl_delete", 00:04:27.765 "bdev_ftl_load", 00:04:27.765 "bdev_ftl_create", 00:04:27.765 "bdev_virtio_attach_controller", 00:04:27.765 "bdev_virtio_scsi_get_devices", 00:04:27.765 "bdev_virtio_detach_controller", 00:04:27.765 "bdev_virtio_blk_set_hotplug", 00:04:27.765 "bdev_iscsi_delete", 00:04:27.765 "bdev_iscsi_create", 00:04:27.765 "bdev_iscsi_set_options", 00:04:27.765 "bdev_uring_delete", 00:04:27.765 "bdev_uring_rescan", 00:04:27.765 "bdev_uring_create", 00:04:27.765 "accel_error_inject_error", 00:04:27.765 "ioat_scan_accel_module", 00:04:27.765 "dsa_scan_accel_module", 00:04:27.765 "iaa_scan_accel_module", 00:04:27.765 "keyring_file_remove_key", 00:04:27.765 "keyring_file_add_key", 00:04:27.765 "keyring_linux_set_options", 00:04:27.765 "fsdev_aio_delete", 00:04:27.765 "fsdev_aio_create", 00:04:27.765 "iscsi_get_histogram", 00:04:27.765 "iscsi_enable_histogram", 00:04:27.765 "iscsi_set_options", 00:04:27.765 "iscsi_get_auth_groups", 00:04:27.765 "iscsi_auth_group_remove_secret", 00:04:27.765 "iscsi_auth_group_add_secret", 00:04:27.765 "iscsi_delete_auth_group", 00:04:27.765 "iscsi_create_auth_group", 00:04:27.765 "iscsi_set_discovery_auth", 00:04:27.765 "iscsi_get_options", 00:04:27.765 "iscsi_target_node_request_logout", 00:04:27.765 "iscsi_target_node_set_redirect", 00:04:27.765 "iscsi_target_node_set_auth", 00:04:27.765 "iscsi_target_node_add_lun", 00:04:27.765 "iscsi_get_stats", 00:04:27.765 "iscsi_get_connections", 00:04:27.765 "iscsi_portal_group_set_auth", 00:04:27.765 "iscsi_start_portal_group", 00:04:27.765 "iscsi_delete_portal_group", 00:04:27.765 "iscsi_create_portal_group", 00:04:27.765 "iscsi_get_portal_groups", 00:04:27.765 "iscsi_delete_target_node", 00:04:27.765 "iscsi_target_node_remove_pg_ig_maps", 00:04:27.765 "iscsi_target_node_add_pg_ig_maps", 00:04:27.765 "iscsi_create_target_node", 00:04:27.765 "iscsi_get_target_nodes", 00:04:27.765 "iscsi_delete_initiator_group", 00:04:27.765 "iscsi_initiator_group_remove_initiators", 00:04:27.765 "iscsi_initiator_group_add_initiators", 00:04:27.765 "iscsi_create_initiator_group", 00:04:27.765 "iscsi_get_initiator_groups", 00:04:27.765 "nvmf_set_crdt", 00:04:27.765 "nvmf_set_config", 00:04:27.765 "nvmf_set_max_subsystems", 00:04:27.765 "nvmf_stop_mdns_prr", 00:04:27.765 "nvmf_publish_mdns_prr", 00:04:27.765 "nvmf_subsystem_get_listeners", 00:04:27.765 "nvmf_subsystem_get_qpairs", 00:04:27.765 
"nvmf_subsystem_get_controllers", 00:04:27.765 "nvmf_get_stats", 00:04:27.765 "nvmf_get_transports", 00:04:27.765 "nvmf_create_transport", 00:04:27.765 "nvmf_get_targets", 00:04:27.765 "nvmf_delete_target", 00:04:27.765 "nvmf_create_target", 00:04:27.765 "nvmf_subsystem_allow_any_host", 00:04:27.765 "nvmf_subsystem_set_keys", 00:04:27.765 "nvmf_subsystem_remove_host", 00:04:27.765 "nvmf_subsystem_add_host", 00:04:27.765 "nvmf_ns_remove_host", 00:04:27.765 "nvmf_ns_add_host", 00:04:27.765 "nvmf_subsystem_remove_ns", 00:04:27.765 "nvmf_subsystem_set_ns_ana_group", 00:04:27.765 "nvmf_subsystem_add_ns", 00:04:27.765 "nvmf_subsystem_listener_set_ana_state", 00:04:27.765 "nvmf_discovery_get_referrals", 00:04:27.765 "nvmf_discovery_remove_referral", 00:04:27.765 "nvmf_discovery_add_referral", 00:04:27.765 "nvmf_subsystem_remove_listener", 00:04:27.765 "nvmf_subsystem_add_listener", 00:04:27.765 "nvmf_delete_subsystem", 00:04:27.765 "nvmf_create_subsystem", 00:04:27.765 "nvmf_get_subsystems", 00:04:27.765 "env_dpdk_get_mem_stats", 00:04:27.765 "nbd_get_disks", 00:04:27.765 "nbd_stop_disk", 00:04:27.765 "nbd_start_disk", 00:04:27.765 "ublk_recover_disk", 00:04:27.765 "ublk_get_disks", 00:04:27.765 "ublk_stop_disk", 00:04:27.765 "ublk_start_disk", 00:04:27.765 "ublk_destroy_target", 00:04:27.765 "ublk_create_target", 00:04:27.765 "virtio_blk_create_transport", 00:04:27.765 "virtio_blk_get_transports", 00:04:27.765 "vhost_controller_set_coalescing", 00:04:27.765 "vhost_get_controllers", 00:04:27.765 "vhost_delete_controller", 00:04:27.765 "vhost_create_blk_controller", 00:04:27.765 "vhost_scsi_controller_remove_target", 00:04:27.765 "vhost_scsi_controller_add_target", 00:04:27.765 "vhost_start_scsi_controller", 00:04:27.765 "vhost_create_scsi_controller", 00:04:27.765 "thread_set_cpumask", 00:04:27.765 "scheduler_set_options", 00:04:27.765 "framework_get_governor", 00:04:27.765 "framework_get_scheduler", 00:04:27.765 "framework_set_scheduler", 00:04:27.765 "framework_get_reactors", 00:04:27.765 "thread_get_io_channels", 00:04:27.765 "thread_get_pollers", 00:04:27.765 "thread_get_stats", 00:04:27.765 "framework_monitor_context_switch", 00:04:27.765 "spdk_kill_instance", 00:04:27.765 "log_enable_timestamps", 00:04:27.765 "log_get_flags", 00:04:27.765 "log_clear_flag", 00:04:27.765 "log_set_flag", 00:04:27.765 "log_get_level", 00:04:27.765 "log_set_level", 00:04:27.765 "log_get_print_level", 00:04:27.765 "log_set_print_level", 00:04:27.765 "framework_enable_cpumask_locks", 00:04:27.765 "framework_disable_cpumask_locks", 00:04:27.765 "framework_wait_init", 00:04:27.765 "framework_start_init", 00:04:27.765 "scsi_get_devices", 00:04:27.765 "bdev_get_histogram", 00:04:27.765 "bdev_enable_histogram", 00:04:27.765 "bdev_set_qos_limit", 00:04:27.765 "bdev_set_qd_sampling_period", 00:04:27.765 "bdev_get_bdevs", 00:04:27.765 "bdev_reset_iostat", 00:04:27.765 "bdev_get_iostat", 00:04:27.765 "bdev_examine", 00:04:27.765 "bdev_wait_for_examine", 00:04:27.765 "bdev_set_options", 00:04:27.765 "accel_get_stats", 00:04:27.765 "accel_set_options", 00:04:27.765 "accel_set_driver", 00:04:27.765 "accel_crypto_key_destroy", 00:04:27.765 "accel_crypto_keys_get", 00:04:27.765 "accel_crypto_key_create", 00:04:27.765 "accel_assign_opc", 00:04:27.765 "accel_get_module_info", 00:04:27.765 "accel_get_opc_assignments", 00:04:27.765 "vmd_rescan", 00:04:27.765 "vmd_remove_device", 00:04:27.765 "vmd_enable", 00:04:27.765 "sock_get_default_impl", 00:04:27.765 "sock_set_default_impl", 00:04:27.765 "sock_impl_set_options", 00:04:27.765 
"sock_impl_get_options", 00:04:27.765 "iobuf_get_stats", 00:04:27.765 "iobuf_set_options", 00:04:27.765 "keyring_get_keys", 00:04:27.765 "framework_get_pci_devices", 00:04:27.765 "framework_get_config", 00:04:27.765 "framework_get_subsystems", 00:04:27.765 "fsdev_set_opts", 00:04:27.765 "fsdev_get_opts", 00:04:27.765 "trace_get_info", 00:04:27.765 "trace_get_tpoint_group_mask", 00:04:27.765 "trace_disable_tpoint_group", 00:04:27.765 "trace_enable_tpoint_group", 00:04:27.765 "trace_clear_tpoint_mask", 00:04:27.765 "trace_set_tpoint_mask", 00:04:27.765 "notify_get_notifications", 00:04:27.765 "notify_get_types", 00:04:27.765 "spdk_get_version", 00:04:27.765 "rpc_get_methods" 00:04:27.765 ] 00:04:27.765 13:14:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:27.765 13:14:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.765 13:14:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.765 13:14:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:27.765 13:14:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57859 00:04:27.765 13:14:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57859 ']' 00:04:27.765 13:14:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57859 00:04:27.765 13:14:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:27.765 13:14:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.766 13:14:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57859 00:04:27.766 13:14:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.766 13:14:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.766 killing process with pid 57859 00:04:27.766 13:14:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57859' 00:04:27.766 13:14:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57859 00:04:27.766 13:14:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57859 00:04:28.334 00:04:28.334 real 0m1.875s 00:04:28.334 user 0m3.452s 00:04:28.334 sys 0m0.489s 00:04:28.334 ************************************ 00:04:28.334 END TEST spdkcli_tcp 00:04:28.334 ************************************ 00:04:28.334 13:14:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.334 13:14:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.334 13:14:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.334 13:14:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.334 13:14:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.334 13:14:17 -- common/autotest_common.sh@10 -- # set +x 00:04:28.334 ************************************ 00:04:28.334 START TEST dpdk_mem_utility 00:04:28.334 ************************************ 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.334 * Looking for test storage... 
00:04:28.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.334 13:14:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.334 --rc genhtml_branch_coverage=1 00:04:28.334 --rc genhtml_function_coverage=1 00:04:28.334 --rc genhtml_legend=1 00:04:28.334 --rc geninfo_all_blocks=1 00:04:28.334 --rc geninfo_unexecuted_blocks=1 00:04:28.334 00:04:28.334 ' 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.334 --rc 
genhtml_branch_coverage=1 00:04:28.334 --rc genhtml_function_coverage=1 00:04:28.334 --rc genhtml_legend=1 00:04:28.334 --rc geninfo_all_blocks=1 00:04:28.334 --rc geninfo_unexecuted_blocks=1 00:04:28.334 00:04:28.334 ' 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.334 --rc genhtml_branch_coverage=1 00:04:28.334 --rc genhtml_function_coverage=1 00:04:28.334 --rc genhtml_legend=1 00:04:28.334 --rc geninfo_all_blocks=1 00:04:28.334 --rc geninfo_unexecuted_blocks=1 00:04:28.334 00:04:28.334 ' 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.334 --rc genhtml_branch_coverage=1 00:04:28.334 --rc genhtml_function_coverage=1 00:04:28.334 --rc genhtml_legend=1 00:04:28.334 --rc geninfo_all_blocks=1 00:04:28.334 --rc geninfo_unexecuted_blocks=1 00:04:28.334 00:04:28.334 ' 00:04:28.334 13:14:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:28.334 13:14:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57957 00:04:28.334 13:14:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57957 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57957 ']' 00:04:28.334 13:14:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.334 13:14:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.593 [2024-11-17 13:14:17.582666] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
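The dpdk_mem_utility steps below ask the running target to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (the reply names the dump file, /tmp/spdk_mem_dump.txt) and then summarize that dump with scripts/dpdk_mem_info.py, first as a whole and then per heap with -m 0. A short sketch of the same flow, using only the RPC and script names that appear in this log:

    # Sketch: dump and inspect DPDK memory usage of a running SPDK target.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Ask the target to write its memory snapshot (the reply names the dump file).
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump...
    "$SPDK_DIR/scripts/dpdk_mem_info.py"
    # ...and list heap id 0 allocations in detail.
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0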
00:04:28.593 [2024-11-17 13:14:17.582827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:04:28.593 [2024-11-17 13:14:17.721644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.593 [2024-11-17 13:14:17.766385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.851 [2024-11-17 13:14:17.833350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:28.851 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.851 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:28.851 13:14:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.851 13:14:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.851 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.851 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.851 { 00:04:28.851 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.851 } 00:04:28.851 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.851 13:14:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:29.111 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:29.111 1 heaps totaling size 810.000000 MiB 00:04:29.111 size: 810.000000 MiB heap id: 0 00:04:29.111 end heaps---------- 00:04:29.111 9 mempools totaling size 595.772034 MiB 00:04:29.111 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:29.111 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:29.111 size: 92.545471 MiB name: bdev_io_57957 00:04:29.111 size: 50.003479 MiB name: msgpool_57957 00:04:29.111 size: 36.509338 MiB name: fsdev_io_57957 00:04:29.111 size: 21.763794 MiB name: PDU_Pool 00:04:29.111 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:29.111 size: 4.133484 MiB name: evtpool_57957 00:04:29.111 size: 0.026123 MiB name: Session_Pool 00:04:29.111 end mempools------- 00:04:29.111 6 memzones totaling size 4.142822 MiB 00:04:29.111 size: 1.000366 MiB name: RG_ring_0_57957 00:04:29.111 size: 1.000366 MiB name: RG_ring_1_57957 00:04:29.111 size: 1.000366 MiB name: RG_ring_4_57957 00:04:29.111 size: 1.000366 MiB name: RG_ring_5_57957 00:04:29.111 size: 0.125366 MiB name: RG_ring_2_57957 00:04:29.111 size: 0.015991 MiB name: RG_ring_3_57957 00:04:29.111 end memzones------- 00:04:29.111 13:14:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:29.111 heap id: 0 total size: 810.000000 MiB number of busy elements: 315 number of free elements: 15 00:04:29.111 list of free elements. 
size: 10.812866 MiB 00:04:29.111 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:29.111 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:29.112 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:29.112 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:29.112 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:29.112 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:29.112 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:29.112 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:29.112 element at address: 0x20001a600000 with size: 0.567322 MiB 00:04:29.112 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:29.112 element at address: 0x200000c00000 with size: 0.487000 MiB 00:04:29.112 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:29.112 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:29.112 element at address: 0x200027a00000 with size: 0.395752 MiB 00:04:29.112 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:29.112 list of standard malloc elements. size: 199.268250 MiB 00:04:29.112 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:29.112 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:29.112 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:29.112 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:29.112 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:29.112 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:29.112 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:29.112 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:29.112 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:29.112 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:29.112 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:29.112 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:29.112 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:29.112 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691480 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691540 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691600 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691780 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691840 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691900 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692080 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692140 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692200 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692380 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692440 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692500 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692680 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692740 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692800 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692980 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692e00 with size: 0.000183 MiB 
00:04:29.113 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693040 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693100 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693280 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693340 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693400 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693580 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693640 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693700 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693880 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693940 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694000 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694180 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694240 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694300 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694480 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694540 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694600 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694780 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694840 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694900 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a695080 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a695140 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a695200 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:04:29.113 element at 
address: 0x20001a695380 with size: 0.000183 MiB 00:04:29.113 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a65500 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e4c0 
with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:04:29.113 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:29.114 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:29.114 list of memzone associated elements. 
size: 599.918884 MiB 00:04:29.114 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:29.114 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:29.114 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:29.114 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:29.114 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:29.114 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57957_0 00:04:29.114 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:29.114 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57957_0 00:04:29.114 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:29.114 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57957_0 00:04:29.114 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:29.114 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:29.114 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:29.114 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:29.114 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:29.114 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57957_0 00:04:29.114 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:29.114 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57957 00:04:29.114 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:29.114 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57957 00:04:29.114 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:29.114 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:29.114 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:29.114 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:29.114 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:29.114 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:29.114 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:29.114 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:29.114 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:29.114 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57957 00:04:29.114 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:29.114 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57957 00:04:29.114 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:29.114 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57957 00:04:29.114 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:29.114 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57957 00:04:29.114 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:29.114 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57957 00:04:29.114 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:29.114 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57957 00:04:29.114 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:29.114 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:29.114 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:29.114 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:29.114 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:29.114 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:29.114 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:29.114 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57957 00:04:29.114 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:29.114 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57957 00:04:29.114 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:29.114 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:29.114 element at address: 0x200027a65680 with size: 0.023743 MiB 00:04:29.114 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:29.114 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:29.114 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57957 00:04:29.114 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:04:29.114 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:29.114 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:29.114 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57957 00:04:29.114 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:29.114 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57957 00:04:29.114 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:29.114 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57957 00:04:29.114 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:04:29.114 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:29.114 13:14:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:29.114 13:14:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57957 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57957 ']' 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57957 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57957 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.114 killing process with pid 57957 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57957' 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57957 00:04:29.114 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57957 00:04:29.373 00:04:29.373 real 0m1.227s 00:04:29.373 user 0m1.186s 00:04:29.373 sys 0m0.403s 00:04:29.373 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.373 13:14:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.373 ************************************ 00:04:29.373 END TEST dpdk_mem_utility 00:04:29.373 ************************************ 00:04:29.631 13:14:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.631 13:14:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.631 13:14:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.631 13:14:18 -- common/autotest_common.sh@10 -- # set +x 
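For reference, the dpdk_mem_utility stage above reduces to a short command sequence. The lines below are a minimal sketch, not the verbatim test script: they assume a local SPDK checkout at /home/vagrant/spdk_repo/spdk (the path used in this run) with hugepages already configured, and they replace the harness's waitforlisten helper with a plain sleep. Only calls visible in the log are used (spdk_tgt, the env_dpdk_get_mem_stats RPC, and scripts/dpdk_mem_info.py).

  SPDK=/home/vagrant/spdk_repo/spdk        # assumption: checkout path from this job
  $SPDK/build/bin/spdk_tgt &               # start a bare target, as the test does
  tgt_pid=$!
  sleep 2                                  # crude stand-in for the waitforlisten helper

  # Ask the target to write its DPDK memory statistics; the RPC reports the dump file,
  # {"filename": "/tmp/spdk_mem_dump.txt"} in the run above.
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize the dump (heaps, mempools, memzones), then repeat with -m 0,
  # which in the log above produced the detailed element list for heap id 0.
  $SPDK/scripts/dpdk_mem_info.py
  $SPDK/scripts/dpdk_mem_info.py -m 0

  kill $tgt_pid                            # the test's trap/killprocess does this cleanup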
00:04:29.631 ************************************ 00:04:29.631 START TEST event 00:04:29.631 ************************************ 00:04:29.631 13:14:18 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.631 * Looking for test storage... 00:04:29.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:29.631 13:14:18 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.631 13:14:18 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.631 13:14:18 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.631 13:14:18 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.631 13:14:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.631 13:14:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.632 13:14:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.632 13:14:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.632 13:14:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.632 13:14:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.632 13:14:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.632 13:14:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.632 13:14:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.632 13:14:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.632 13:14:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.632 13:14:18 event -- scripts/common.sh@344 -- # case "$op" in 00:04:29.632 13:14:18 event -- scripts/common.sh@345 -- # : 1 00:04:29.632 13:14:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.632 13:14:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.632 13:14:18 event -- scripts/common.sh@365 -- # decimal 1 00:04:29.632 13:14:18 event -- scripts/common.sh@353 -- # local d=1 00:04:29.632 13:14:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.632 13:14:18 event -- scripts/common.sh@355 -- # echo 1 00:04:29.632 13:14:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.632 13:14:18 event -- scripts/common.sh@366 -- # decimal 2 00:04:29.632 13:14:18 event -- scripts/common.sh@353 -- # local d=2 00:04:29.632 13:14:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.632 13:14:18 event -- scripts/common.sh@355 -- # echo 2 00:04:29.632 13:14:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.632 13:14:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.632 13:14:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.632 13:14:18 event -- scripts/common.sh@368 -- # return 0 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.632 --rc genhtml_branch_coverage=1 00:04:29.632 --rc genhtml_function_coverage=1 00:04:29.632 --rc genhtml_legend=1 00:04:29.632 --rc geninfo_all_blocks=1 00:04:29.632 --rc geninfo_unexecuted_blocks=1 00:04:29.632 00:04:29.632 ' 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.632 --rc genhtml_branch_coverage=1 00:04:29.632 --rc genhtml_function_coverage=1 00:04:29.632 --rc genhtml_legend=1 00:04:29.632 --rc 
geninfo_all_blocks=1 00:04:29.632 --rc geninfo_unexecuted_blocks=1 00:04:29.632 00:04:29.632 ' 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.632 --rc genhtml_branch_coverage=1 00:04:29.632 --rc genhtml_function_coverage=1 00:04:29.632 --rc genhtml_legend=1 00:04:29.632 --rc geninfo_all_blocks=1 00:04:29.632 --rc geninfo_unexecuted_blocks=1 00:04:29.632 00:04:29.632 ' 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.632 --rc genhtml_branch_coverage=1 00:04:29.632 --rc genhtml_function_coverage=1 00:04:29.632 --rc genhtml_legend=1 00:04:29.632 --rc geninfo_all_blocks=1 00:04:29.632 --rc geninfo_unexecuted_blocks=1 00:04:29.632 00:04:29.632 ' 00:04:29.632 13:14:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:29.632 13:14:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.632 13:14:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:29.632 13:14:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.632 13:14:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.632 ************************************ 00:04:29.632 START TEST event_perf 00:04:29.632 ************************************ 00:04:29.632 13:14:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.632 Running I/O for 1 seconds...[2024-11-17 13:14:18.827528] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:29.632 [2024-11-17 13:14:18.827641] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58030 ] 00:04:29.891 [2024-11-17 13:14:18.971140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.891 [2024-11-17 13:14:19.014449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.891 [2024-11-17 13:14:19.014596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.891 [2024-11-17 13:14:19.014700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.891 Running I/O for 1 seconds...[2024-11-17 13:14:19.014703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.278 00:04:31.278 lcore 0: 196443 00:04:31.278 lcore 1: 196443 00:04:31.278 lcore 2: 196444 00:04:31.278 lcore 3: 196443 00:04:31.278 done. 
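The per-lcore counters just printed come straight from the event_perf binary. As a rough standalone reproduction (a sketch only; the 0xF core mask and one-second duration are simply the values this job passed, and the binary must already be built in the checkout with hugepages configured):

  SPDK=/home/vagrant/spdk_repo/spdk            # assumption: checkout path from this job
  # -m 0xF starts reactors on cores 0-3 and -t 1 measures for one second,
  # matching the invocation recorded above.
  $SPDK/test/event/event_perf/event_perf -m 0xF -t 1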
00:04:31.278 00:04:31.278 real 0m1.267s 00:04:31.278 user 0m4.095s 00:04:31.278 sys 0m0.054s 00:04:31.278 13:14:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.278 13:14:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:31.278 ************************************ 00:04:31.278 END TEST event_perf 00:04:31.278 ************************************ 00:04:31.278 13:14:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:31.278 13:14:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:31.278 13:14:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.278 13:14:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.278 ************************************ 00:04:31.278 START TEST event_reactor 00:04:31.278 ************************************ 00:04:31.278 13:14:20 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:31.278 [2024-11-17 13:14:20.142481] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:31.278 [2024-11-17 13:14:20.142570] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58069 ] 00:04:31.278 [2024-11-17 13:14:20.286671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.278 [2024-11-17 13:14:20.331550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.265 test_start 00:04:32.265 oneshot 00:04:32.265 tick 100 00:04:32.265 tick 100 00:04:32.265 tick 250 00:04:32.265 tick 100 00:04:32.265 tick 100 00:04:32.265 tick 250 00:04:32.265 tick 500 00:04:32.265 tick 100 00:04:32.265 tick 100 00:04:32.265 tick 100 00:04:32.265 tick 250 00:04:32.265 tick 100 00:04:32.265 tick 100 00:04:32.265 test_end 00:04:32.265 00:04:32.265 real 0m1.255s 00:04:32.265 user 0m1.114s 00:04:32.265 sys 0m0.035s 00:04:32.265 13:14:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.265 13:14:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 ************************************ 00:04:32.265 END TEST event_reactor 00:04:32.265 ************************************ 00:04:32.265 13:14:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.265 13:14:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:32.265 13:14:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.265 13:14:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.265 ************************************ 00:04:32.265 START TEST event_reactor_perf 00:04:32.265 ************************************ 00:04:32.265 13:14:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.265 [2024-11-17 13:14:21.449330] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:32.265 [2024-11-17 13:14:21.449431] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:04:32.538 [2024-11-17 13:14:21.597916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.538 [2024-11-17 13:14:21.648057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.474 test_start 00:04:33.474 test_end 00:04:33.474 Performance: 386901 events per second 00:04:33.732 00:04:33.732 real 0m1.264s 00:04:33.732 user 0m1.116s 00:04:33.732 sys 0m0.043s 00:04:33.732 13:14:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.732 13:14:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.732 ************************************ 00:04:33.732 END TEST event_reactor_perf 00:04:33.732 ************************************ 00:04:33.732 13:14:22 event -- event/event.sh@49 -- # uname -s 00:04:33.732 13:14:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.732 13:14:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.732 13:14:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.732 13:14:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.732 13:14:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.732 ************************************ 00:04:33.732 START TEST event_scheduler 00:04:33.732 ************************************ 00:04:33.732 13:14:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.732 * Looking for test storage... 
00:04:33.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:33.732 13:14:22 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.732 13:14:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.732 13:14:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.732 13:14:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:33.732 13:14:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.733 13:14:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.733 13:14:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.733 13:14:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.733 --rc genhtml_branch_coverage=1 00:04:33.733 --rc genhtml_function_coverage=1 00:04:33.733 --rc genhtml_legend=1 00:04:33.733 --rc geninfo_all_blocks=1 00:04:33.733 --rc geninfo_unexecuted_blocks=1 00:04:33.733 00:04:33.733 ' 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.733 --rc genhtml_branch_coverage=1 00:04:33.733 --rc genhtml_function_coverage=1 00:04:33.733 --rc genhtml_legend=1 00:04:33.733 --rc geninfo_all_blocks=1 00:04:33.733 --rc geninfo_unexecuted_blocks=1 00:04:33.733 00:04:33.733 ' 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.733 --rc genhtml_branch_coverage=1 00:04:33.733 --rc genhtml_function_coverage=1 00:04:33.733 --rc genhtml_legend=1 00:04:33.733 --rc geninfo_all_blocks=1 00:04:33.733 --rc geninfo_unexecuted_blocks=1 00:04:33.733 00:04:33.733 ' 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.733 --rc genhtml_branch_coverage=1 00:04:33.733 --rc genhtml_function_coverage=1 00:04:33.733 --rc genhtml_legend=1 00:04:33.733 --rc geninfo_all_blocks=1 00:04:33.733 --rc geninfo_unexecuted_blocks=1 00:04:33.733 00:04:33.733 ' 00:04:33.733 13:14:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.733 13:14:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58168 00:04:33.733 13:14:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.733 13:14:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.733 13:14:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58168 00:04:33.733 13:14:22 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58168 ']' 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.733 13:14:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.991 [2024-11-17 13:14:22.977841] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:33.991 [2024-11-17 13:14:22.977927] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58168 ] 00:04:33.991 [2024-11-17 13:14:23.128233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.991 [2024-11-17 13:14:23.186142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.991 [2024-11-17 13:14:23.189797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.991 [2024-11-17 13:14:23.189913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.991 [2024-11-17 13:14:23.189923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:34.250 13:14:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.250 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.250 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.250 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.250 POWER: Cannot set governor of lcore 0 to performance 00:04:34.250 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.250 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.250 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.250 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.250 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:34.250 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:34.250 POWER: Unable to set Power Management Environment for lcore 0 00:04:34.250 [2024-11-17 13:14:23.278913] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:34.250 [2024-11-17 13:14:23.278926] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:34.250 [2024-11-17 13:14:23.278939] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.250 [2024-11-17 13:14:23.278952] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.250 [2024-11-17 13:14:23.278960] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.250 [2024-11-17 13:14:23.278967] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.250 13:14:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.250 [2024-11-17 13:14:23.346021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.250 [2024-11-17 13:14:23.383524] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.250 13:14:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.250 13:14:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.250 ************************************ 00:04:34.250 START TEST scheduler_create_thread 00:04:34.250 ************************************ 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.250 2 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.250 3 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.250 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.251 4 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.251 5 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.251 6 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.251 7 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.251 8 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.251 9 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.251 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.510 10 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.510 13:14:23 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.510 13:14:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.078 13:14:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.078 13:14:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:35.078 13:14:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:35.078 13:14:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.079 13:14:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.016 ************************************ 00:04:36.016 END TEST scheduler_create_thread 00:04:36.016 ************************************ 00:04:36.016 13:14:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.016 00:04:36.016 real 0m1.754s 00:04:36.016 user 0m0.014s 00:04:36.016 sys 0m0.004s 00:04:36.016 13:14:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.016 13:14:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.016 13:14:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:36.016 13:14:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58168 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58168 ']' 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58168 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58168 00:04:36.016 killing process with pid 58168 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58168' 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58168 00:04:36.016 13:14:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58168 00:04:36.582 [2024-11-17 13:14:25.629800] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:36.841 00:04:36.841 real 0m3.082s 00:04:36.841 user 0m3.963s 00:04:36.841 sys 0m0.347s 00:04:36.841 13:14:25 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.841 ************************************ 00:04:36.841 END TEST event_scheduler 00:04:36.841 ************************************ 00:04:36.841 13:14:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.841 13:14:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:36.841 13:14:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:36.841 13:14:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.841 13:14:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.841 13:14:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.841 ************************************ 00:04:36.841 START TEST app_repeat 00:04:36.841 ************************************ 00:04:36.841 13:14:25 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58249 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.841 Process app_repeat pid: 58249 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58249' 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:36.841 spdk_app_start Round 0 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:36.841 13:14:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58249 /var/tmp/spdk-nbd.sock 00:04:36.841 13:14:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58249 ']' 00:04:36.842 13:14:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.842 13:14:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:36.842 13:14:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:36.842 13:14:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.842 13:14:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.842 [2024-11-17 13:14:25.910647] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:36.842 [2024-11-17 13:14:25.911287] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58249 ] 00:04:36.842 [2024-11-17 13:14:26.057571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.100 [2024-11-17 13:14:26.113796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.100 [2024-11-17 13:14:26.113819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.100 [2024-11-17 13:14:26.167861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.100 13:14:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.100 13:14:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:37.100 13:14:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.360 Malloc0 00:04:37.360 13:14:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.928 Malloc1 00:04:37.928 13:14:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.928 13:14:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.187 /dev/nbd0 00:04:38.187 13:14:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.187 13:14:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.187 1+0 records in 00:04:38.187 1+0 records out 00:04:38.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251994 s, 16.3 MB/s 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.187 13:14:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.187 13:14:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.187 13:14:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.187 13:14:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.445 /dev/nbd1 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.445 1+0 records in 00:04:38.445 1+0 records out 00:04:38.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297411 s, 13.8 MB/s 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:38.445 13:14:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.445 13:14:27 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.445 13:14:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.703 13:14:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:38.703 { 00:04:38.703 "nbd_device": "/dev/nbd0", 00:04:38.703 "bdev_name": "Malloc0" 00:04:38.703 }, 00:04:38.703 { 00:04:38.703 "nbd_device": "/dev/nbd1", 00:04:38.703 "bdev_name": "Malloc1" 00:04:38.703 } 00:04:38.703 ]' 00:04:38.703 13:14:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.703 { 00:04:38.703 "nbd_device": "/dev/nbd0", 00:04:38.703 "bdev_name": "Malloc0" 00:04:38.703 }, 00:04:38.703 { 00:04:38.703 "nbd_device": "/dev/nbd1", 00:04:38.703 "bdev_name": "Malloc1" 00:04:38.703 } 00:04:38.703 ]' 00:04:38.703 13:14:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.962 /dev/nbd1' 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.962 /dev/nbd1' 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.962 256+0 records in 00:04:38.962 256+0 records out 00:04:38.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110648 s, 94.8 MB/s 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.962 256+0 records in 00:04:38.962 256+0 records out 00:04:38.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213317 s, 49.2 MB/s 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.962 13:14:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.962 256+0 records in 00:04:38.962 
256+0 records out 00:04:38.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295722 s, 35.5 MB/s 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.962 13:14:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.963 13:14:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.223 13:14:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.483 13:14:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.051 13:14:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.051 13:14:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.051 13:14:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:40.051 13:14:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:40.051 13:14:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.309 13:14:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:40.309 [2024-11-17 13:14:29.427819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.309 [2024-11-17 13:14:29.466903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.309 [2024-11-17 13:14:29.466915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.309 [2024-11-17 13:14:29.520247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.309 [2024-11-17 13:14:29.520324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.309 [2024-11-17 13:14:29.520337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:43.598 spdk_app_start Round 1 00:04:43.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.598 13:14:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.598 13:14:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:43.598 13:14:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58249 /var/tmp/spdk-nbd.sock 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58249 ']' 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.598 13:14:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.598 13:14:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.864 Malloc0 00:04:43.864 13:14:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.123 Malloc1 00:04:44.123 13:14:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.123 13:14:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.383 /dev/nbd0 00:04:44.383 13:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.383 13:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.383 1+0 records in 00:04:44.383 1+0 records out 
00:04:44.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339748 s, 12.1 MB/s 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.383 13:14:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.383 13:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.383 13:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.383 13:14:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.950 /dev/nbd1 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.951 1+0 records in 00:04:44.951 1+0 records out 00:04:44.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332725 s, 12.3 MB/s 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.951 13:14:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.951 13:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.210 { 00:04:45.210 "nbd_device": "/dev/nbd0", 00:04:45.210 "bdev_name": "Malloc0" 00:04:45.210 }, 00:04:45.210 { 00:04:45.210 "nbd_device": "/dev/nbd1", 00:04:45.210 "bdev_name": "Malloc1" 00:04:45.210 } 
00:04:45.210 ]' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.210 { 00:04:45.210 "nbd_device": "/dev/nbd0", 00:04:45.210 "bdev_name": "Malloc0" 00:04:45.210 }, 00:04:45.210 { 00:04:45.210 "nbd_device": "/dev/nbd1", 00:04:45.210 "bdev_name": "Malloc1" 00:04:45.210 } 00:04:45.210 ]' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.210 /dev/nbd1' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.210 /dev/nbd1' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.210 256+0 records in 00:04:45.210 256+0 records out 00:04:45.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636872 s, 165 MB/s 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.210 256+0 records in 00:04:45.210 256+0 records out 00:04:45.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224261 s, 46.8 MB/s 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.210 256+0 records in 00:04:45.210 256+0 records out 00:04:45.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238945 s, 43.9 MB/s 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.210 13:14:34 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.210 13:14:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.469 13:14:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.727 13:14:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.986 13:14:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.986 13:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.986 13:14:35 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.244 13:14:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.244 13:14:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.504 13:14:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.504 [2024-11-17 13:14:35.719652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.763 [2024-11-17 13:14:35.761383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.763 [2024-11-17 13:14:35.761394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.763 [2024-11-17 13:14:35.814632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:46.763 [2024-11-17 13:14:35.814750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.763 [2024-11-17 13:14:35.814762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.047 spdk_app_start Round 2 00:04:50.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.047 13:14:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.047 13:14:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:50.047 13:14:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58249 /var/tmp/spdk-nbd.sock 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58249 ']' 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.047 13:14:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.047 13:14:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.047 Malloc0 00:04:50.047 13:14:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.305 Malloc1 00:04:50.305 13:14:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.305 13:14:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.563 /dev/nbd0 00:04:50.563 13:14:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.563 13:14:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.563 1+0 records in 00:04:50.563 1+0 records out 
00:04:50.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312052 s, 13.1 MB/s 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.563 13:14:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.564 13:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.564 13:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.564 13:14:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.822 /dev/nbd1 00:04:50.822 13:14:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.822 13:14:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.822 13:14:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:50.822 13:14:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.822 13:14:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.822 13:14:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.822 13:14:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.823 1+0 records in 00:04:50.823 1+0 records out 00:04:50.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286274 s, 14.3 MB/s 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.823 13:14:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.823 13:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.823 13:14:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.823 13:14:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.823 13:14:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.823 13:14:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.081 13:14:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.081 { 00:04:51.081 "nbd_device": "/dev/nbd0", 00:04:51.081 "bdev_name": "Malloc0" 00:04:51.081 }, 00:04:51.081 { 00:04:51.081 "nbd_device": "/dev/nbd1", 00:04:51.081 "bdev_name": "Malloc1" 00:04:51.081 } 
00:04:51.081 ]' 00:04:51.081 13:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.081 { 00:04:51.081 "nbd_device": "/dev/nbd0", 00:04:51.081 "bdev_name": "Malloc0" 00:04:51.081 }, 00:04:51.081 { 00:04:51.081 "nbd_device": "/dev/nbd1", 00:04:51.081 "bdev_name": "Malloc1" 00:04:51.082 } 00:04:51.082 ]' 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.082 /dev/nbd1' 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.082 /dev/nbd1' 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.082 256+0 records in 00:04:51.082 256+0 records out 00:04:51.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0064276 s, 163 MB/s 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.082 256+0 records in 00:04:51.082 256+0 records out 00:04:51.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209639 s, 50.0 MB/s 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.082 13:14:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.341 256+0 records in 00:04:51.341 256+0 records out 00:04:51.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023176 s, 45.2 MB/s 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.341 13:14:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.600 13:14:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.859 13:14:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.118 13:14:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.118 13:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.118 13:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.376 13:14:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.376 13:14:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.635 13:14:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.635 [2024-11-17 13:14:41.784832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.635 [2024-11-17 13:14:41.826444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.635 [2024-11-17 13:14:41.826456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.893 [2024-11-17 13:14:41.881530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.893 [2024-11-17 13:14:41.881628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.893 [2024-11-17 13:14:41.881642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.425 13:14:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58249 /var/tmp/spdk-nbd.sock 00:04:55.425 13:14:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58249 ']' 00:04:55.425 13:14:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.425 13:14:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.425 13:14:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:55.425 13:14:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.425 13:14:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:55.992 13:14:44 event.app_repeat -- event/event.sh@39 -- # killprocess 58249 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58249 ']' 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58249 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58249 00:04:55.992 killing process with pid 58249 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58249' 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58249 00:04:55.992 13:14:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58249 00:04:55.992 spdk_app_start is called in Round 0. 00:04:55.992 Shutdown signal received, stop current app iteration 00:04:55.992 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization... 00:04:55.992 spdk_app_start is called in Round 1. 00:04:55.992 Shutdown signal received, stop current app iteration 00:04:55.992 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization... 00:04:55.992 spdk_app_start is called in Round 2. 00:04:55.992 Shutdown signal received, stop current app iteration 00:04:55.992 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization... 00:04:55.992 spdk_app_start is called in Round 3. 00:04:55.992 Shutdown signal received, stop current app iteration 00:04:55.992 13:14:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:55.992 13:14:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:55.992 00:04:55.992 real 0m19.249s 00:04:55.992 user 0m44.155s 00:04:55.992 sys 0m2.924s 00:04:55.993 13:14:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.993 ************************************ 00:04:55.993 END TEST app_repeat 00:04:55.993 ************************************ 00:04:55.993 13:14:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.993 13:14:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:55.993 13:14:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:55.993 13:14:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.993 13:14:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.993 13:14:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.993 ************************************ 00:04:55.993 START TEST cpu_locks 00:04:55.993 ************************************ 00:04:55.993 13:14:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:56.252 * Looking for test storage... 
00:04:56.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.252 13:14:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.252 --rc genhtml_branch_coverage=1 00:04:56.252 --rc genhtml_function_coverage=1 00:04:56.252 --rc genhtml_legend=1 00:04:56.252 --rc geninfo_all_blocks=1 00:04:56.252 --rc geninfo_unexecuted_blocks=1 00:04:56.252 00:04:56.252 ' 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.252 --rc genhtml_branch_coverage=1 00:04:56.252 --rc genhtml_function_coverage=1 
00:04:56.252 --rc genhtml_legend=1 00:04:56.252 --rc geninfo_all_blocks=1 00:04:56.252 --rc geninfo_unexecuted_blocks=1 00:04:56.252 00:04:56.252 ' 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.252 --rc genhtml_branch_coverage=1 00:04:56.252 --rc genhtml_function_coverage=1 00:04:56.252 --rc genhtml_legend=1 00:04:56.252 --rc geninfo_all_blocks=1 00:04:56.252 --rc geninfo_unexecuted_blocks=1 00:04:56.252 00:04:56.252 ' 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.252 --rc genhtml_branch_coverage=1 00:04:56.252 --rc genhtml_function_coverage=1 00:04:56.252 --rc genhtml_legend=1 00:04:56.252 --rc geninfo_all_blocks=1 00:04:56.252 --rc geninfo_unexecuted_blocks=1 00:04:56.252 00:04:56.252 ' 00:04:56.252 13:14:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:56.252 13:14:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:56.252 13:14:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:56.252 13:14:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.252 13:14:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.252 ************************************ 00:04:56.252 START TEST default_locks 00:04:56.252 ************************************ 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58688 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58688 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58688 ']' 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.252 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.252 [2024-11-17 13:14:45.436514] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:56.252 [2024-11-17 13:14:45.437005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58688 ] 00:04:56.511 [2024-11-17 13:14:45.575119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.511 [2024-11-17 13:14:45.617214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.511 [2024-11-17 13:14:45.682245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.770 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.770 13:14:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:56.770 13:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58688 00:04:56.770 13:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58688 00:04:56.770 13:14:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58688 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58688 ']' 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58688 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58688 00:04:57.338 killing process with pid 58688 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58688' 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58688 00:04:57.338 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58688 00:04:57.596 13:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58688 00:04:57.596 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:57.596 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58688 00:04:57.596 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:57.596 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.596 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:57.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.596 ERROR: process (pid: 58688) is no longer running 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58688 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58688 ']' 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58688) - No such process 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.597 ************************************ 00:04:57.597 END TEST default_locks 00:04:57.597 ************************************ 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.597 00:04:57.597 real 0m1.322s 00:04:57.597 user 0m1.313s 00:04:57.597 sys 0m0.489s 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.597 13:14:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 13:14:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:57.597 13:14:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.597 13:14:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.597 13:14:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.597 ************************************ 00:04:57.597 START TEST default_locks_via_rpc 00:04:57.597 ************************************ 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:57.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
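Annotation (not part of the captured output): each cpu_locks test asserts that the spdk_tgt process actually holds its CPU-core file lock by listing the locks owned by the pid and grepping for the spdk_cpu_lock name (event/cpu_locks.sh@22 in the trace). A minimal sketch of that check, assuming only the two commands shown above; the pid 58688 is just the one this run happened to use.

    locks_exist() {
        local pid=$1
        # succeeds only if the target process holds a file lock whose name
        # contains "spdk_cpu_lock" (the per-core lock files under /var/tmp)
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 58688 && echo "pid 58688 holds its spdk_cpu_lock file lock"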
00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58738 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58738 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58738 ']' 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.597 13:14:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.856 [2024-11-17 13:14:46.818106] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:57.856 [2024-11-17 13:14:46.818213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58738 ] 00:04:57.856 [2024-11-17 13:14:46.964867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.856 [2024-11-17 13:14:47.006261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.856 [2024-11-17 13:14:47.073844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 58738 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58738 00:04:58.793 13:14:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.052 13:14:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58738 00:04:59.052 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58738 ']' 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58738 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58738 00:04:59.053 killing process with pid 58738 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58738' 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58738 00:04:59.053 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58738 00:04:59.620 ************************************ 00:04:59.620 END TEST default_locks_via_rpc 00:04:59.620 ************************************ 00:04:59.620 00:04:59.620 real 0m1.852s 00:04:59.620 user 0m2.026s 00:04:59.620 sys 0m0.546s 00:04:59.620 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.620 13:14:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.620 13:14:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:59.620 13:14:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.620 13:14:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.620 13:14:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.620 ************************************ 00:04:59.620 START TEST non_locking_app_on_locked_coremask 00:04:59.620 ************************************ 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58788 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58788 /var/tmp/spdk.sock 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58788 ']' 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
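Annotation (not part of the captured output): the killprocess helper traced above (autotest_common.sh@954-978) checks that the pid is still alive with kill -0, reads its command name with ps, compares it against sudo, then kills and waits so the exit status is reaped. A sketch of the non-sudo path that this run exercises (process_name=reactor_0); the helper's sudo branch is not shown in this trace and may behave differently.

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                        # only proceed if the pid is alive
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name != sudo ]]; then              # this run: reactor_0, so kill directly
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                       # works because spdk_tgt is a child of this shell
    }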
00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.621 13:14:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.621 [2024-11-17 13:14:48.709583] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:59.621 [2024-11-17 13:14:48.709672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58788 ] 00:04:59.879 [2024-11-17 13:14:48.848552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.879 [2024-11-17 13:14:48.904913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.879 [2024-11-17 13:14:48.972599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58792 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58792 /var/tmp/spdk2.sock 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58792 ']' 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.137 13:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.137 [2024-11-17 13:14:49.262356] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:00.137 [2024-11-17 13:14:49.262502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58792 ] 00:05:00.395 [2024-11-17 13:14:49.428756] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:00.395 [2024-11-17 13:14:49.428794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.395 [2024-11-17 13:14:49.528989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.654 [2024-11-17 13:14:49.663665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.225 13:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.225 13:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.225 13:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58788 00:05:01.225 13:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58788 00:05:01.225 13:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.160 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58788 00:05:02.160 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58788 ']' 00:05:02.160 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58788 00:05:02.160 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.160 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.160 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58788 00:05:02.160 killing process with pid 58788 00:05:02.161 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.161 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.161 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58788' 00:05:02.161 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58788 00:05:02.161 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58788 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58792 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58792 ']' 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58792 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58792 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.097 killing process with pid 58792 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58792' 00:05:03.097 13:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58792 00:05:03.097 13:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58792 00:05:03.357 00:05:03.357 real 0m3.745s 00:05:03.357 user 0m4.147s 00:05:03.357 sys 0m1.108s 00:05:03.357 13:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.357 13:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.357 ************************************ 00:05:03.357 END TEST non_locking_app_on_locked_coremask 00:05:03.357 ************************************ 00:05:03.357 13:14:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:03.357 13:14:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.357 13:14:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.357 13:14:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.357 ************************************ 00:05:03.357 START TEST locking_app_on_unlocked_coremask 00:05:03.357 ************************************ 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58859 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58859 /var/tmp/spdk.sock 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58859 ']' 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.357 13:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.357 [2024-11-17 13:14:52.528113] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:03.357 [2024-11-17 13:14:52.528282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58859 ] 00:05:03.616 [2024-11-17 13:14:52.674754] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:03.616 [2024-11-17 13:14:52.674835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.616 [2024-11-17 13:14:52.725105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.616 [2024-11-17 13:14:52.798013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58873 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58873 /var/tmp/spdk2.sock 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58873 ']' 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.877 13:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:03.877 [2024-11-17 13:14:53.086753] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:03.877 [2024-11-17 13:14:53.086924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58873 ] 00:05:04.138 [2024-11-17 13:14:53.249942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.396 [2024-11-17 13:14:53.375190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.396 [2024-11-17 13:14:53.527099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.963 13:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.963 13:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.963 13:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58873 00:05:04.963 13:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58873 00:05:04.963 13:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58859 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58859 ']' 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58859 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58859 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.900 killing process with pid 58859 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58859' 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58859 00:05:05.900 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58859 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58873 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58873 ']' 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58873 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58873 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.837 killing process with pid 58873 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58873' 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58873 00:05:06.837 13:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58873 00:05:07.096 00:05:07.096 real 0m3.742s 00:05:07.096 user 0m4.126s 00:05:07.096 sys 0m1.168s 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.096 ************************************ 00:05:07.096 END TEST locking_app_on_unlocked_coremask 00:05:07.096 ************************************ 00:05:07.096 13:14:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.096 13:14:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.096 13:14:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.096 13:14:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.096 ************************************ 00:05:07.096 START TEST locking_app_on_locked_coremask 00:05:07.096 ************************************ 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58940 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58940 /var/tmp/spdk.sock 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58940 ']' 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.096 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.355 [2024-11-17 13:14:56.321999] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:07.355 [2024-11-17 13:14:56.322104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58940 ] 00:05:07.355 [2024-11-17 13:14:56.471588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.355 [2024-11-17 13:14:56.525529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.614 [2024-11-17 13:14:56.592493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58949 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58949 /var/tmp/spdk2.sock 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58949 /var/tmp/spdk2.sock 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58949 /var/tmp/spdk2.sock 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58949 ']' 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.614 13:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.872 [2024-11-17 13:14:56.856740] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:07.872 [2024-11-17 13:14:56.856859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58949 ] 00:05:07.872 [2024-11-17 13:14:57.020474] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58940 has claimed it. 00:05:07.872 [2024-11-17 13:14:57.020546] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58949) - No such process 00:05:08.440 ERROR: process (pid: 58949) is no longer running 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58940 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58940 00:05:08.440 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.008 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58940 00:05:09.008 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58940 ']' 00:05:09.008 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58940 00:05:09.008 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.008 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.008 13:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58940 00:05:09.008 13:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.008 13:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.008 killing process with pid 58940 00:05:09.008 13:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58940' 00:05:09.008 13:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58940 00:05:09.008 13:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58940 00:05:09.268 00:05:09.268 real 0m2.124s 00:05:09.268 user 0m2.361s 00:05:09.268 sys 0m0.639s 00:05:09.268 13:14:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.268 13:14:58 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:09.268 ************************************ 00:05:09.268 END TEST locking_app_on_locked_coremask 00:05:09.268 ************************************ 00:05:09.268 13:14:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.268 13:14:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.268 13:14:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.268 13:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.268 ************************************ 00:05:09.268 START TEST locking_overlapped_coremask 00:05:09.268 ************************************ 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58994 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58994 /var/tmp/spdk.sock 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58994 ']' 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.268 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.527 [2024-11-17 13:14:58.504909] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:09.527 [2024-11-17 13:14:58.505027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58994 ] 00:05:09.527 [2024-11-17 13:14:58.650883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.527 [2024-11-17 13:14:58.707875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.527 [2024-11-17 13:14:58.708009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.527 [2024-11-17 13:14:58.708027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.787 [2024-11-17 13:14:58.776725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59005 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59005 /var/tmp/spdk2.sock 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59005 /var/tmp/spdk2.sock 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59005 /var/tmp/spdk2.sock 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59005 ']' 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.787 13:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.047 [2024-11-17 13:14:59.046772] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:10.047 [2024-11-17 13:14:59.046901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59005 ] 00:05:10.047 [2024-11-17 13:14:59.207396] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58994 has claimed it. 00:05:10.047 [2024-11-17 13:14:59.207452] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.614 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59005) - No such process 00:05:10.614 ERROR: process (pid: 59005) is no longer running 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58994 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58994 ']' 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58994 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58994 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58994' 00:05:10.614 killing process with pid 58994 00:05:10.614 13:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58994 00:05:10.614 13:14:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58994 00:05:11.181 00:05:11.181 real 0m1.754s 00:05:11.181 user 0m4.750s 00:05:11.181 sys 0m0.412s 00:05:11.181 13:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.181 13:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.181 ************************************ 00:05:11.181 END TEST locking_overlapped_coremask 00:05:11.181 ************************************ 00:05:11.181 13:15:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.181 13:15:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.181 13:15:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.181 13:15:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.181 ************************************ 00:05:11.181 START TEST locking_overlapped_coremask_via_rpc 00:05:11.181 ************************************ 00:05:11.181 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59050 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59050 /var/tmp/spdk.sock 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59050 ']' 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.182 13:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.182 [2024-11-17 13:15:00.303487] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:11.182 [2024-11-17 13:15:00.303615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59050 ] 00:05:11.440 [2024-11-17 13:15:00.450100] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.440 [2024-11-17 13:15:00.450150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.440 [2024-11-17 13:15:00.500151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.440 [2024-11-17 13:15:00.500267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.440 [2024-11-17 13:15:00.500269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.440 [2024-11-17 13:15:00.570304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59068 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59068 /var/tmp/spdk2.sock 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59068 ']' 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.376 13:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.376 [2024-11-17 13:15:01.349059] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:12.376 [2024-11-17 13:15:01.349163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:05:12.376 [2024-11-17 13:15:01.500739] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.376 [2024-11-17 13:15:01.500800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.377 [2024-11-17 13:15:01.593586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.377 [2024-11-17 13:15:01.596897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.377 [2024-11-17 13:15:01.596897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.636 [2024-11-17 13:15:01.730072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.204 request: 00:05:13.204 { 00:05:13.204 [2024-11-17 13:15:02.392917] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59050 has claimed it. 
00:05:13.204 "method": "framework_enable_cpumask_locks", 00:05:13.204 "req_id": 1 00:05:13.204 } 00:05:13.204 Got JSON-RPC error response 00:05:13.204 response: 00:05:13.204 { 00:05:13.204 "code": -32603, 00:05:13.204 "message": "Failed to claim CPU core: 2" 00:05:13.204 } 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59050 /var/tmp/spdk.sock 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59050 ']' 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.204 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.205 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59068 /var/tmp/spdk2.sock 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59068 ']' 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.463 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.036 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.036 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.036 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:14.036 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.036 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.037 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.037 00:05:14.037 real 0m2.738s 00:05:14.037 user 0m1.480s 00:05:14.037 sys 0m0.189s 00:05:14.037 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.037 13:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.037 ************************************ 00:05:14.037 END TEST locking_overlapped_coremask_via_rpc 00:05:14.037 ************************************ 00:05:14.037 13:15:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:14.037 13:15:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59050 ]] 00:05:14.037 13:15:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59050 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59050 ']' 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59050 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59050 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.037 killing process with pid 59050 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59050' 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59050 00:05:14.037 13:15:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59050 00:05:14.295 13:15:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59068 ]] 00:05:14.295 13:15:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59068 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59068 ']' 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59068 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.295 
13:15:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59068 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.295 killing process with pid 59068 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59068' 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59068 00:05:14.295 13:15:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59068 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59050 ]] 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59050 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59050 ']' 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59050 00:05:14.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59050) - No such process 00:05:14.862 Process with pid 59050 is not found 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59050 is not found' 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59068 ]] 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59068 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59068 ']' 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59068 00:05:14.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59068) - No such process 00:05:14.862 Process with pid 59068 is not found 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59068 is not found' 00:05:14.862 13:15:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.862 ************************************ 00:05:14.862 END TEST cpu_locks 00:05:14.862 ************************************ 00:05:14.862 00:05:14.862 real 0m18.653s 00:05:14.862 user 0m33.456s 00:05:14.862 sys 0m5.422s 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.862 13:15:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.862 ************************************ 00:05:14.862 END TEST event 00:05:14.862 ************************************ 00:05:14.862 00:05:14.862 real 0m45.248s 00:05:14.862 user 1m28.097s 00:05:14.862 sys 0m9.087s 00:05:14.862 13:15:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.863 13:15:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.863 13:15:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:14.863 13:15:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.863 13:15:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.863 13:15:03 -- common/autotest_common.sh@10 -- # set +x 00:05:14.863 ************************************ 00:05:14.863 START TEST thread 00:05:14.863 ************************************ 00:05:14.863 13:15:03 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:14.863 * Looking for test storage... 
00:05:14.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:14.863 13:15:03 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.863 13:15:03 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.863 13:15:03 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.863 13:15:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.863 13:15:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.863 13:15:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.863 13:15:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.863 13:15:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.863 13:15:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.863 13:15:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.863 13:15:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.863 13:15:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.863 13:15:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.863 13:15:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.863 13:15:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:14.863 13:15:04 thread -- scripts/common.sh@345 -- # : 1 00:05:14.863 13:15:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.863 13:15:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.863 13:15:04 thread -- scripts/common.sh@365 -- # decimal 1 00:05:14.863 13:15:04 thread -- scripts/common.sh@353 -- # local d=1 00:05:14.863 13:15:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.863 13:15:04 thread -- scripts/common.sh@355 -- # echo 1 00:05:14.863 13:15:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.863 13:15:04 thread -- scripts/common.sh@366 -- # decimal 2 00:05:14.863 13:15:04 thread -- scripts/common.sh@353 -- # local d=2 00:05:14.863 13:15:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.863 13:15:04 thread -- scripts/common.sh@355 -- # echo 2 00:05:14.863 13:15:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.863 13:15:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.863 13:15:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.863 13:15:04 thread -- scripts/common.sh@368 -- # return 0 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.863 --rc genhtml_branch_coverage=1 00:05:14.863 --rc genhtml_function_coverage=1 00:05:14.863 --rc genhtml_legend=1 00:05:14.863 --rc geninfo_all_blocks=1 00:05:14.863 --rc geninfo_unexecuted_blocks=1 00:05:14.863 00:05:14.863 ' 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.863 --rc genhtml_branch_coverage=1 00:05:14.863 --rc genhtml_function_coverage=1 00:05:14.863 --rc genhtml_legend=1 00:05:14.863 --rc geninfo_all_blocks=1 00:05:14.863 --rc geninfo_unexecuted_blocks=1 00:05:14.863 00:05:14.863 ' 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:14.863 --rc genhtml_branch_coverage=1 00:05:14.863 --rc genhtml_function_coverage=1 00:05:14.863 --rc genhtml_legend=1 00:05:14.863 --rc geninfo_all_blocks=1 00:05:14.863 --rc geninfo_unexecuted_blocks=1 00:05:14.863 00:05:14.863 ' 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.863 --rc genhtml_branch_coverage=1 00:05:14.863 --rc genhtml_function_coverage=1 00:05:14.863 --rc genhtml_legend=1 00:05:14.863 --rc geninfo_all_blocks=1 00:05:14.863 --rc geninfo_unexecuted_blocks=1 00:05:14.863 00:05:14.863 ' 00:05:14.863 13:15:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.863 13:15:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.122 ************************************ 00:05:15.122 START TEST thread_poller_perf 00:05:15.122 ************************************ 00:05:15.122 13:15:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.122 [2024-11-17 13:15:04.100657] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:15.122 [2024-11-17 13:15:04.101075] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:05:15.122 [2024-11-17 13:15:04.232909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.122 [2024-11-17 13:15:04.274504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.122 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:16.500 [2024-11-17T13:15:05.724Z] ====================================== 00:05:16.500 [2024-11-17T13:15:05.724Z] busy:2210367438 (cyc) 00:05:16.500 [2024-11-17T13:15:05.724Z] total_run_count: 396000 00:05:16.500 [2024-11-17T13:15:05.724Z] tsc_hz: 2200000000 (cyc) 00:05:16.500 [2024-11-17T13:15:05.724Z] ====================================== 00:05:16.500 [2024-11-17T13:15:05.724Z] poller_cost: 5581 (cyc), 2536 (nsec) 00:05:16.500 00:05:16.500 real 0m1.238s 00:05:16.500 user 0m1.096s 00:05:16.500 sys 0m0.035s 00:05:16.500 13:15:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.500 13:15:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.500 ************************************ 00:05:16.500 END TEST thread_poller_perf 00:05:16.500 ************************************ 00:05:16.500 13:15:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.500 13:15:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:16.500 13:15:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.500 13:15:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.500 ************************************ 00:05:16.500 START TEST thread_poller_perf 00:05:16.500 ************************************ 00:05:16.500 13:15:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.500 [2024-11-17 13:15:05.391587] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:16.500 [2024-11-17 13:15:05.391709] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59234 ] 00:05:16.500 [2024-11-17 13:15:05.536599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.500 Running 1000 pollers for 1 seconds with 0 microseconds period. 
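The poller_cost figures for the 1 µs run appear to follow directly from the busy-cycle count, the run count and the reported TSC rate; redoing the arithmetic with the numbers printed above (integer shell arithmetic, truncating the same way):

    echo $(( 2210367438 / 396000 ))              # -> 5581 cycles per poller invocation
    echo $(( 5581 * 1000000000 / 2200000000 ))   # -> 2536 nsec at the 2200000000 cyc/s TSC

The 0 µs run reported below fits the same pattern: 2201638424 / 5177000 ≈ 425 cyc ≈ 193 nsec.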
00:05:16.500 [2024-11-17 13:15:05.590377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.435 [2024-11-17T13:15:06.659Z] ====================================== 00:05:17.435 [2024-11-17T13:15:06.659Z] busy:2201638424 (cyc) 00:05:17.435 [2024-11-17T13:15:06.660Z] total_run_count: 5177000 00:05:17.436 [2024-11-17T13:15:06.660Z] tsc_hz: 2200000000 (cyc) 00:05:17.436 [2024-11-17T13:15:06.660Z] ====================================== 00:05:17.436 [2024-11-17T13:15:06.660Z] poller_cost: 425 (cyc), 193 (nsec) 00:05:17.436 00:05:17.436 real 0m1.264s 00:05:17.436 user 0m1.108s 00:05:17.436 sys 0m0.049s 00:05:17.436 13:15:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.436 13:15:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.436 ************************************ 00:05:17.436 END TEST thread_poller_perf 00:05:17.436 ************************************ 00:05:17.694 13:15:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:17.694 00:05:17.694 real 0m2.769s 00:05:17.694 user 0m2.332s 00:05:17.694 sys 0m0.224s 00:05:17.694 13:15:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.694 13:15:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.694 ************************************ 00:05:17.694 END TEST thread 00:05:17.694 ************************************ 00:05:17.694 13:15:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:17.694 13:15:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:17.694 13:15:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.694 13:15:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.694 13:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.694 ************************************ 00:05:17.694 START TEST app_cmdline 00:05:17.694 ************************************ 00:05:17.694 13:15:06 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:17.694 * Looking for test storage... 
00:05:17.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:17.694 13:15:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.694 13:15:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.694 13:15:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.694 13:15:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:17.694 13:15:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.695 13:15:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.695 --rc genhtml_branch_coverage=1 00:05:17.695 --rc genhtml_function_coverage=1 00:05:17.695 --rc genhtml_legend=1 00:05:17.695 --rc geninfo_all_blocks=1 00:05:17.695 --rc geninfo_unexecuted_blocks=1 00:05:17.695 00:05:17.695 ' 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.695 --rc genhtml_branch_coverage=1 00:05:17.695 --rc genhtml_function_coverage=1 00:05:17.695 --rc genhtml_legend=1 00:05:17.695 --rc geninfo_all_blocks=1 00:05:17.695 --rc geninfo_unexecuted_blocks=1 00:05:17.695 
00:05:17.695 ' 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.695 --rc genhtml_branch_coverage=1 00:05:17.695 --rc genhtml_function_coverage=1 00:05:17.695 --rc genhtml_legend=1 00:05:17.695 --rc geninfo_all_blocks=1 00:05:17.695 --rc geninfo_unexecuted_blocks=1 00:05:17.695 00:05:17.695 ' 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.695 --rc genhtml_branch_coverage=1 00:05:17.695 --rc genhtml_function_coverage=1 00:05:17.695 --rc genhtml_legend=1 00:05:17.695 --rc geninfo_all_blocks=1 00:05:17.695 --rc geninfo_unexecuted_blocks=1 00:05:17.695 00:05:17.695 ' 00:05:17.695 13:15:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:17.695 13:15:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59311 00:05:17.695 13:15:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:17.695 13:15:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59311 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59311 ']' 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.695 13:15:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.954 [2024-11-17 13:15:06.984436] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:17.954 [2024-11-17 13:15:06.984577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ] 00:05:17.954 [2024-11-17 13:15:07.128443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.213 [2024-11-17 13:15:07.177158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.213 [2024-11-17 13:15:07.244944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:19.149 { 00:05:19.149 "version": "SPDK v25.01-pre git sha1 ca87521f7", 00:05:19.149 "fields": { 00:05:19.149 "major": 25, 00:05:19.149 "minor": 1, 00:05:19.149 "patch": 0, 00:05:19.149 "suffix": "-pre", 00:05:19.149 "commit": "ca87521f7" 00:05:19.149 } 00:05:19.149 } 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:19.149 13:15:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:19.149 13:15:08 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.408 request: 00:05:19.408 { 00:05:19.408 "method": "env_dpdk_get_mem_stats", 00:05:19.408 "req_id": 1 00:05:19.408 } 00:05:19.408 Got JSON-RPC error response 00:05:19.408 response: 00:05:19.408 { 00:05:19.408 "code": -32601, 00:05:19.408 "message": "Method not found" 00:05:19.408 } 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.408 13:15:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59311 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59311 ']' 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59311 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59311 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59311' 00:05:19.408 killing process with pid 59311 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 59311 00:05:19.408 13:15:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 59311 00:05:19.976 00:05:19.976 real 0m2.249s 00:05:19.976 user 0m2.845s 00:05:19.976 sys 0m0.484s 00:05:19.976 13:15:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.976 13:15:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.976 ************************************ 00:05:19.976 END TEST app_cmdline 00:05:19.976 ************************************ 00:05:19.976 13:15:09 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:19.976 13:15:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.976 13:15:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.976 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:05:19.976 ************************************ 00:05:19.976 START TEST version 00:05:19.976 ************************************ 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:19.976 * Looking for test storage... 
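The version test starting here pulls the release fields straight out of include/spdk/version.h. Each get_header_version call in the trace below reduces to a grep/cut/tr pipeline; the values it extracts are the ones this run reports (the exact layout of version.h is assumed, the pipeline and the extracted values are from the trace):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
    # likewise MINOR -> 1, PATCH -> 0, SUFFIX -> -pre
    # patch == 0, so the script builds "25.1", then "25.1rc0" for the -pre suffix, which matches
    # python3 -c 'import spdk; print(spdk.__version__)'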
00:05:19.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.976 13:15:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.976 13:15:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.976 13:15:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.976 13:15:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.976 13:15:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.976 13:15:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.976 13:15:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.976 13:15:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.976 13:15:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.976 13:15:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.976 13:15:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.976 13:15:09 version -- scripts/common.sh@344 -- # case "$op" in 00:05:19.976 13:15:09 version -- scripts/common.sh@345 -- # : 1 00:05:19.976 13:15:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.976 13:15:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.976 13:15:09 version -- scripts/common.sh@365 -- # decimal 1 00:05:19.976 13:15:09 version -- scripts/common.sh@353 -- # local d=1 00:05:19.976 13:15:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.976 13:15:09 version -- scripts/common.sh@355 -- # echo 1 00:05:19.976 13:15:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.976 13:15:09 version -- scripts/common.sh@366 -- # decimal 2 00:05:19.976 13:15:09 version -- scripts/common.sh@353 -- # local d=2 00:05:19.976 13:15:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.976 13:15:09 version -- scripts/common.sh@355 -- # echo 2 00:05:19.976 13:15:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.976 13:15:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.976 13:15:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.976 13:15:09 version -- scripts/common.sh@368 -- # return 0 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.976 --rc genhtml_branch_coverage=1 00:05:19.976 --rc genhtml_function_coverage=1 00:05:19.976 --rc genhtml_legend=1 00:05:19.976 --rc geninfo_all_blocks=1 00:05:19.976 --rc geninfo_unexecuted_blocks=1 00:05:19.976 00:05:19.976 ' 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.976 --rc genhtml_branch_coverage=1 00:05:19.976 --rc genhtml_function_coverage=1 00:05:19.976 --rc genhtml_legend=1 00:05:19.976 --rc geninfo_all_blocks=1 00:05:19.976 --rc geninfo_unexecuted_blocks=1 00:05:19.976 00:05:19.976 ' 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.976 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:19.976 --rc genhtml_branch_coverage=1 00:05:19.976 --rc genhtml_function_coverage=1 00:05:19.976 --rc genhtml_legend=1 00:05:19.976 --rc geninfo_all_blocks=1 00:05:19.976 --rc geninfo_unexecuted_blocks=1 00:05:19.976 00:05:19.976 ' 00:05:19.976 13:15:09 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.976 --rc genhtml_branch_coverage=1 00:05:19.976 --rc genhtml_function_coverage=1 00:05:19.976 --rc genhtml_legend=1 00:05:19.976 --rc geninfo_all_blocks=1 00:05:19.976 --rc geninfo_unexecuted_blocks=1 00:05:19.976 00:05:19.976 ' 00:05:19.976 13:15:09 version -- app/version.sh@17 -- # get_header_version major 00:05:19.976 13:15:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:19.976 13:15:09 version -- app/version.sh@14 -- # cut -f2 00:05:19.976 13:15:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.234 13:15:09 version -- app/version.sh@17 -- # major=25 00:05:20.234 13:15:09 version -- app/version.sh@18 -- # get_header_version minor 00:05:20.234 13:15:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:20.234 13:15:09 version -- app/version.sh@14 -- # cut -f2 00:05:20.234 13:15:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.234 13:15:09 version -- app/version.sh@18 -- # minor=1 00:05:20.234 13:15:09 version -- app/version.sh@19 -- # get_header_version patch 00:05:20.234 13:15:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:20.234 13:15:09 version -- app/version.sh@14 -- # cut -f2 00:05:20.234 13:15:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.234 13:15:09 version -- app/version.sh@19 -- # patch=0 00:05:20.234 13:15:09 version -- app/version.sh@20 -- # get_header_version suffix 00:05:20.234 13:15:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:20.234 13:15:09 version -- app/version.sh@14 -- # cut -f2 00:05:20.234 13:15:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.234 13:15:09 version -- app/version.sh@20 -- # suffix=-pre 00:05:20.234 13:15:09 version -- app/version.sh@22 -- # version=25.1 00:05:20.234 13:15:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:20.234 13:15:09 version -- app/version.sh@28 -- # version=25.1rc0 00:05:20.234 13:15:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:20.234 13:15:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:20.234 13:15:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:20.234 13:15:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:20.234 00:05:20.234 real 0m0.237s 00:05:20.234 user 0m0.144s 00:05:20.234 sys 0m0.131s 00:05:20.234 13:15:09 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.234 13:15:09 version -- common/autotest_common.sh@10 -- # set +x 00:05:20.234 ************************************ 00:05:20.234 END TEST version 00:05:20.234 ************************************ 00:05:20.234 13:15:09 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:20.234 13:15:09 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:20.234 13:15:09 -- spdk/autotest.sh@194 -- # uname -s 00:05:20.234 13:15:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:20.234 13:15:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:20.234 13:15:09 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:20.234 13:15:09 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:20.234 13:15:09 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:20.234 13:15:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.234 13:15:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.234 13:15:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.234 ************************************ 00:05:20.234 START TEST spdk_dd 00:05:20.234 ************************************ 00:05:20.234 13:15:09 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:20.234 * Looking for test storage... 00:05:20.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:20.234 13:15:09 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.234 13:15:09 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.234 13:15:09 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.492 13:15:09 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:20.492 13:15:09 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:20.493 13:15:09 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.493 13:15:09 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.493 --rc genhtml_branch_coverage=1 00:05:20.493 --rc genhtml_function_coverage=1 00:05:20.493 --rc genhtml_legend=1 00:05:20.493 --rc geninfo_all_blocks=1 00:05:20.493 --rc geninfo_unexecuted_blocks=1 00:05:20.493 00:05:20.493 ' 00:05:20.493 13:15:09 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.493 --rc genhtml_branch_coverage=1 00:05:20.493 --rc genhtml_function_coverage=1 00:05:20.493 --rc genhtml_legend=1 00:05:20.493 --rc geninfo_all_blocks=1 00:05:20.493 --rc geninfo_unexecuted_blocks=1 00:05:20.493 00:05:20.493 ' 00:05:20.493 13:15:09 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.493 --rc genhtml_branch_coverage=1 00:05:20.493 --rc genhtml_function_coverage=1 00:05:20.493 --rc genhtml_legend=1 00:05:20.493 --rc geninfo_all_blocks=1 00:05:20.493 --rc geninfo_unexecuted_blocks=1 00:05:20.493 00:05:20.493 ' 00:05:20.493 13:15:09 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.493 --rc genhtml_branch_coverage=1 00:05:20.493 --rc genhtml_function_coverage=1 00:05:20.493 --rc genhtml_legend=1 00:05:20.493 --rc geninfo_all_blocks=1 00:05:20.493 --rc geninfo_unexecuted_blocks=1 00:05:20.493 00:05:20.493 ' 00:05:20.493 13:15:09 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.493 13:15:09 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.493 13:15:09 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.493 13:15:09 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.493 13:15:09 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.493 13:15:09 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:20.493 13:15:09 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.493 13:15:09 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.752 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.752 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.752 13:15:09 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:20.752 13:15:09 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:20.752 13:15:09 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:20.752 13:15:09 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:20.752 13:15:09 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:20.752 13:15:09 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:20.752 13:15:09 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.013 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:21.014 13:15:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:21.014 * spdk_dd linked to liburing 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:21.014 13:15:10 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:21.015 13:15:10 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:21.015 13:15:10 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:21.015 13:15:10 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:21.015 13:15:10 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:21.015 13:15:10 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:21.015 13:15:10 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:21.015 13:15:10 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:21.015 13:15:10 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:21.015 13:15:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.015 13:15:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.015 13:15:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:21.015 ************************************ 00:05:21.015 START TEST spdk_dd_basic_rw 00:05:21.015 ************************************ 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:21.015 * Looking for test storage... 00:05:21.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.015 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.016 --rc genhtml_branch_coverage=1 00:05:21.016 --rc genhtml_function_coverage=1 00:05:21.016 --rc genhtml_legend=1 00:05:21.016 --rc geninfo_all_blocks=1 00:05:21.016 --rc geninfo_unexecuted_blocks=1 00:05:21.016 00:05:21.016 ' 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.016 --rc genhtml_branch_coverage=1 00:05:21.016 --rc genhtml_function_coverage=1 00:05:21.016 --rc genhtml_legend=1 00:05:21.016 --rc geninfo_all_blocks=1 00:05:21.016 --rc geninfo_unexecuted_blocks=1 00:05:21.016 00:05:21.016 ' 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.016 --rc genhtml_branch_coverage=1 00:05:21.016 --rc genhtml_function_coverage=1 00:05:21.016 --rc genhtml_legend=1 00:05:21.016 --rc geninfo_all_blocks=1 00:05:21.016 --rc geninfo_unexecuted_blocks=1 00:05:21.016 00:05:21.016 ' 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.016 --rc genhtml_branch_coverage=1 00:05:21.016 --rc genhtml_function_coverage=1 00:05:21.016 --rc genhtml_legend=1 00:05:21.016 --rc geninfo_all_blocks=1 00:05:21.016 --rc geninfo_unexecuted_blocks=1 00:05:21.016 00:05:21.016 ' 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
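The step traced next, get_native_nvme_bs 0000:00:10.0, derives the drive's native block size: it captures an spdk_nvme_identify dump for the PCIe address, finds which LBA format is currently selected, and reads that format's data size. A condensed bash sketch of that flow, reconstructed from the trace below rather than copied from dd/common.sh (the regex variable, the error returns, and the bare tool name are assumptions; the trace invokes the tool from build/bin):

get_native_nvme_bs() {
    local pci=$1 lbaf re
    local -a id
    # Capture the full identify output for the controller at this PCIe address
    mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    # First match: index of the currently selected LBA format (here: #04)
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    # Second match: that format's data size, i.e. the native block size
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re ]] || return 1
    echo "${BASH_REMATCH[1]}"    # 4096 for this controller
}

basic_rw.sh stores the result as native_bs=4096 and builds its test block sizes from it.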
00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:21.016 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:21.278 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:21.278 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:21.279 ************************************ 00:05:21.279 START TEST dd_bs_lt_native_bs 00:05:21.279 ************************************ 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:21.279 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:21.279 { 00:05:21.279 "subsystems": [ 00:05:21.279 { 00:05:21.279 "subsystem": "bdev", 00:05:21.279 "config": [ 00:05:21.279 { 00:05:21.279 "params": { 00:05:21.279 "trtype": "pcie", 00:05:21.279 "traddr": "0000:00:10.0", 00:05:21.279 "name": "Nvme0" 00:05:21.279 }, 00:05:21.279 "method": "bdev_nvme_attach_controller" 00:05:21.279 }, 00:05:21.279 { 00:05:21.279 "method": "bdev_wait_for_examine" 00:05:21.279 } 00:05:21.279 ] 00:05:21.279 } 00:05:21.279 ] 00:05:21.279 } 00:05:21.279 [2024-11-17 13:15:10.465458] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:21.279 [2024-11-17 13:15:10.465585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:05:21.538 [2024-11-17 13:15:10.603241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.538 [2024-11-17 13:15:10.645306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.538 [2024-11-17 13:15:10.696214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.798 [2024-11-17 13:15:10.800552] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:21.798 [2024-11-17 13:15:10.800638] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.798 [2024-11-17 13:15:10.917285] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.798 00:05:21.798 real 0m0.556s 00:05:21.798 user 0m0.375s 00:05:21.798 sys 0m0.139s 00:05:21.798 13:15:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.798 13:15:10 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:21.798 ************************************ 00:05:21.798 END TEST dd_bs_lt_native_bs 00:05:21.798 ************************************ 00:05:21.798 13:15:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:21.798 13:15:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.798 13:15:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.798 13:15:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.057 ************************************ 00:05:22.057 START TEST dd_rw 00:05:22.057 ************************************ 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:22.057 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.624 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:22.624 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:22.624 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:22.624 13:15:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.624 [2024-11-17 13:15:11.633333] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:22.624 [2024-11-17 13:15:11.633440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:05:22.624 { 00:05:22.624 "subsystems": [ 00:05:22.624 { 00:05:22.624 "subsystem": "bdev", 00:05:22.624 "config": [ 00:05:22.624 { 00:05:22.624 "params": { 00:05:22.624 "trtype": "pcie", 00:05:22.624 "traddr": "0000:00:10.0", 00:05:22.624 "name": "Nvme0" 00:05:22.624 }, 00:05:22.624 "method": "bdev_nvme_attach_controller" 00:05:22.624 }, 00:05:22.624 { 00:05:22.624 "method": "bdev_wait_for_examine" 00:05:22.624 } 00:05:22.624 ] 00:05:22.624 } 00:05:22.624 ] 00:05:22.624 } 00:05:22.624 [2024-11-17 13:15:11.776540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.624 [2024-11-17 13:15:11.817392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.882 [2024-11-17 13:15:11.867874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.882  [2024-11-17T13:15:12.365Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:23.141 00:05:23.141 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:23.141 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:23.141 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:23.141 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:23.141 [2024-11-17 13:15:12.189537] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:23.141 [2024-11-17 13:15:12.189647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:05:23.141 { 00:05:23.141 "subsystems": [ 00:05:23.141 { 00:05:23.141 "subsystem": "bdev", 00:05:23.141 "config": [ 00:05:23.141 { 00:05:23.141 "params": { 00:05:23.141 "trtype": "pcie", 00:05:23.141 "traddr": "0000:00:10.0", 00:05:23.141 "name": "Nvme0" 00:05:23.141 }, 00:05:23.141 "method": "bdev_nvme_attach_controller" 00:05:23.141 }, 00:05:23.141 { 00:05:23.141 "method": "bdev_wait_for_examine" 00:05:23.141 } 00:05:23.141 ] 00:05:23.141 } 00:05:23.141 ] 00:05:23.141 } 00:05:23.141 [2024-11-17 13:15:12.326001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.442 [2024-11-17 13:15:12.373080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.442 [2024-11-17 13:15:12.424123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.442  [2024-11-17T13:15:12.930Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:23.706 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:23.706 13:15:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:23.706 [2024-11-17 13:15:12.753499] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:23.706 [2024-11-17 13:15:12.753592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59728 ] 00:05:23.706 { 00:05:23.706 "subsystems": [ 00:05:23.706 { 00:05:23.706 "subsystem": "bdev", 00:05:23.706 "config": [ 00:05:23.706 { 00:05:23.706 "params": { 00:05:23.706 "trtype": "pcie", 00:05:23.706 "traddr": "0000:00:10.0", 00:05:23.706 "name": "Nvme0" 00:05:23.706 }, 00:05:23.707 "method": "bdev_nvme_attach_controller" 00:05:23.707 }, 00:05:23.707 { 00:05:23.707 "method": "bdev_wait_for_examine" 00:05:23.707 } 00:05:23.707 ] 00:05:23.707 } 00:05:23.707 ] 00:05:23.707 } 00:05:23.707 [2024-11-17 13:15:12.892586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.965 [2024-11-17 13:15:12.935944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.965 [2024-11-17 13:15:12.987800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.965  [2024-11-17T13:15:13.447Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:24.223 00:05:24.223 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:24.223 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:24.223 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:24.224 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:24.224 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:24.224 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:24.224 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.790 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:24.790 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:24.790 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:24.790 13:15:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.790 { 00:05:24.790 "subsystems": [ 00:05:24.790 { 00:05:24.790 "subsystem": "bdev", 00:05:24.790 "config": [ 00:05:24.790 { 00:05:24.790 "params": { 00:05:24.790 "trtype": "pcie", 00:05:24.790 "traddr": "0000:00:10.0", 00:05:24.790 "name": "Nvme0" 00:05:24.791 }, 00:05:24.791 "method": "bdev_nvme_attach_controller" 00:05:24.791 }, 00:05:24.791 { 00:05:24.791 "method": "bdev_wait_for_examine" 00:05:24.791 } 00:05:24.791 ] 00:05:24.791 } 00:05:24.791 ] 00:05:24.791 } 00:05:24.791 [2024-11-17 13:15:13.983336] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:24.791 [2024-11-17 13:15:13.983469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:05:25.049 [2024-11-17 13:15:14.137097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.049 [2024-11-17 13:15:14.208410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.049 [2024-11-17 13:15:14.266809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.307  [2024-11-17T13:15:14.789Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:25.565 00:05:25.565 13:15:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:25.565 13:15:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:25.565 13:15:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:25.565 13:15:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:25.565 [2024-11-17 13:15:14.618064] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:25.566 [2024-11-17 13:15:14.618163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ] 00:05:25.566 { 00:05:25.566 "subsystems": [ 00:05:25.566 { 00:05:25.566 "subsystem": "bdev", 00:05:25.566 "config": [ 00:05:25.566 { 00:05:25.566 "params": { 00:05:25.566 "trtype": "pcie", 00:05:25.566 "traddr": "0000:00:10.0", 00:05:25.566 "name": "Nvme0" 00:05:25.566 }, 00:05:25.566 "method": "bdev_nvme_attach_controller" 00:05:25.566 }, 00:05:25.566 { 00:05:25.566 "method": "bdev_wait_for_examine" 00:05:25.566 } 00:05:25.566 ] 00:05:25.566 } 00:05:25.566 ] 00:05:25.566 } 00:05:25.566 [2024-11-17 13:15:14.765373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.823 [2024-11-17 13:15:14.822967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.823 [2024-11-17 13:15:14.878103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.823  [2024-11-17T13:15:15.305Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:26.081 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:26.081 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:26.081 { 00:05:26.081 "subsystems": [ 00:05:26.081 { 00:05:26.081 "subsystem": "bdev", 00:05:26.081 "config": [ 00:05:26.081 { 00:05:26.081 "params": { 00:05:26.081 "trtype": "pcie", 00:05:26.081 "traddr": "0000:00:10.0", 00:05:26.081 "name": "Nvme0" 00:05:26.081 }, 00:05:26.081 "method": "bdev_nvme_attach_controller" 00:05:26.081 }, 00:05:26.081 { 00:05:26.081 "method": "bdev_wait_for_examine" 00:05:26.081 } 00:05:26.081 ] 00:05:26.081 } 00:05:26.081 ] 00:05:26.081 } 00:05:26.081 [2024-11-17 13:15:15.238816] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:26.081 [2024-11-17 13:15:15.238941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59782 ] 00:05:26.339 [2024-11-17 13:15:15.387843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.339 [2024-11-17 13:15:15.454548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.339 [2024-11-17 13:15:15.509476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.598  [2024-11-17T13:15:15.822Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:26.598 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:26.598 13:15:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.164 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:27.164 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:27.164 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:27.164 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.423 { 00:05:27.423 "subsystems": [ 00:05:27.423 { 00:05:27.423 "subsystem": "bdev", 00:05:27.423 "config": [ 00:05:27.423 { 00:05:27.423 "params": { 00:05:27.423 "trtype": "pcie", 00:05:27.423 "traddr": "0000:00:10.0", 00:05:27.423 "name": "Nvme0" 00:05:27.423 }, 00:05:27.423 "method": "bdev_nvme_attach_controller" 00:05:27.423 }, 00:05:27.423 { 00:05:27.423 "method": "bdev_wait_for_examine" 00:05:27.423 } 00:05:27.423 ] 00:05:27.423 } 00:05:27.423 ] 00:05:27.423 } 00:05:27.423 [2024-11-17 13:15:16.393129] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:27.423 [2024-11-17 13:15:16.393226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59801 ] 00:05:27.423 [2024-11-17 13:15:16.536692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.423 [2024-11-17 13:15:16.586834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.423 [2024-11-17 13:15:16.638434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.681  [2024-11-17T13:15:17.165Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:27.941 00:05:27.941 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:27.941 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:27.941 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:27.941 13:15:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.941 [2024-11-17 13:15:16.982376] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:27.941 [2024-11-17 13:15:16.982472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:05:27.941 { 00:05:27.941 "subsystems": [ 00:05:27.941 { 00:05:27.941 "subsystem": "bdev", 00:05:27.941 "config": [ 00:05:27.941 { 00:05:27.941 "params": { 00:05:27.941 "trtype": "pcie", 00:05:27.941 "traddr": "0000:00:10.0", 00:05:27.941 "name": "Nvme0" 00:05:27.941 }, 00:05:27.941 "method": "bdev_nvme_attach_controller" 00:05:27.941 }, 00:05:27.941 { 00:05:27.941 "method": "bdev_wait_for_examine" 00:05:27.941 } 00:05:27.941 ] 00:05:27.941 } 00:05:27.941 ] 00:05:27.941 } 00:05:27.941 [2024-11-17 13:15:17.123604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.200 [2024-11-17 13:15:17.172695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.200 [2024-11-17 13:15:17.227853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.200  [2024-11-17T13:15:17.683Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:28.459 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:28.459 13:15:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:28.459 [2024-11-17 13:15:17.583594] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:28.459 [2024-11-17 13:15:17.583712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:05:28.459 { 00:05:28.459 "subsystems": [ 00:05:28.459 { 00:05:28.459 "subsystem": "bdev", 00:05:28.459 "config": [ 00:05:28.459 { 00:05:28.459 "params": { 00:05:28.459 "trtype": "pcie", 00:05:28.459 "traddr": "0000:00:10.0", 00:05:28.459 "name": "Nvme0" 00:05:28.459 }, 00:05:28.459 "method": "bdev_nvme_attach_controller" 00:05:28.459 }, 00:05:28.459 { 00:05:28.459 "method": "bdev_wait_for_examine" 00:05:28.459 } 00:05:28.459 ] 00:05:28.459 } 00:05:28.459 ] 00:05:28.459 } 00:05:28.718 [2024-11-17 13:15:17.731133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.718 [2024-11-17 13:15:17.780410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.718 [2024-11-17 13:15:17.842082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.978  [2024-11-17T13:15:18.202Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:28.978 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:28.978 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.546 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:29.546 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:29.546 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:29.546 13:15:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.546 [2024-11-17 13:15:18.669912] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:29.546 [2024-11-17 13:15:18.670173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:05:29.546 { 00:05:29.546 "subsystems": [ 00:05:29.546 { 00:05:29.546 "subsystem": "bdev", 00:05:29.546 "config": [ 00:05:29.546 { 00:05:29.546 "params": { 00:05:29.546 "trtype": "pcie", 00:05:29.546 "traddr": "0000:00:10.0", 00:05:29.546 "name": "Nvme0" 00:05:29.546 }, 00:05:29.546 "method": "bdev_nvme_attach_controller" 00:05:29.546 }, 00:05:29.546 { 00:05:29.546 "method": "bdev_wait_for_examine" 00:05:29.546 } 00:05:29.546 ] 00:05:29.546 } 00:05:29.546 ] 00:05:29.546 } 00:05:29.804 [2024-11-17 13:15:18.816652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.804 [2024-11-17 13:15:18.880888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.804 [2024-11-17 13:15:18.937836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.062  [2024-11-17T13:15:19.286Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:30.062 00:05:30.062 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:30.062 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:30.062 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.062 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.318 { 00:05:30.318 "subsystems": [ 00:05:30.318 { 00:05:30.318 "subsystem": "bdev", 00:05:30.318 "config": [ 00:05:30.318 { 00:05:30.318 "params": { 00:05:30.318 "trtype": "pcie", 00:05:30.318 "traddr": "0000:00:10.0", 00:05:30.318 "name": "Nvme0" 00:05:30.318 }, 00:05:30.318 "method": "bdev_nvme_attach_controller" 00:05:30.318 }, 00:05:30.318 { 00:05:30.318 "method": "bdev_wait_for_examine" 00:05:30.318 } 00:05:30.318 ] 00:05:30.318 } 00:05:30.318 ] 00:05:30.318 } 00:05:30.318 [2024-11-17 13:15:19.320237] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:30.319 [2024-11-17 13:15:19.320372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:05:30.319 [2024-11-17 13:15:19.477062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.576 [2024-11-17 13:15:19.561969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.576 [2024-11-17 13:15:19.619115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.576  [2024-11-17T13:15:20.058Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:30.834 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.834 13:15:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.834 [2024-11-17 13:15:19.978017] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:30.834 [2024-11-17 13:15:19.978332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59882 ] 00:05:30.834 { 00:05:30.834 "subsystems": [ 00:05:30.834 { 00:05:30.834 "subsystem": "bdev", 00:05:30.834 "config": [ 00:05:30.834 { 00:05:30.834 "params": { 00:05:30.834 "trtype": "pcie", 00:05:30.834 "traddr": "0000:00:10.0", 00:05:30.834 "name": "Nvme0" 00:05:30.834 }, 00:05:30.834 "method": "bdev_nvme_attach_controller" 00:05:30.834 }, 00:05:30.834 { 00:05:30.834 "method": "bdev_wait_for_examine" 00:05:30.834 } 00:05:30.834 ] 00:05:30.834 } 00:05:30.834 ] 00:05:30.834 } 00:05:31.092 [2024-11-17 13:15:20.122482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.092 [2024-11-17 13:15:20.186218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.092 [2024-11-17 13:15:20.242702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.350  [2024-11-17T13:15:20.574Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:31.350 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:31.350 13:15:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.916 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:31.916 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:31.916 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:31.916 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.916 { 00:05:31.916 "subsystems": [ 00:05:31.916 { 00:05:31.916 "subsystem": "bdev", 00:05:31.916 "config": [ 00:05:31.916 { 00:05:31.916 "params": { 00:05:31.916 "trtype": "pcie", 00:05:31.916 "traddr": "0000:00:10.0", 00:05:31.916 "name": "Nvme0" 00:05:31.916 }, 00:05:31.916 "method": "bdev_nvme_attach_controller" 00:05:31.916 }, 00:05:31.916 { 00:05:31.916 "method": "bdev_wait_for_examine" 00:05:31.916 } 00:05:31.916 ] 00:05:31.916 } 00:05:31.916 ] 00:05:31.916 } 00:05:31.916 [2024-11-17 13:15:21.097135] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:31.916 [2024-11-17 13:15:21.097422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:05:32.173 [2024-11-17 13:15:21.261962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.173 [2024-11-17 13:15:21.329869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.173 [2024-11-17 13:15:21.383684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.431  [2024-11-17T13:15:21.913Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:32.689 00:05:32.689 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:32.689 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:32.689 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:32.689 13:15:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.689 [2024-11-17 13:15:21.742076] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:32.689 [2024-11-17 13:15:21.742201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:05:32.689 { 00:05:32.689 "subsystems": [ 00:05:32.689 { 00:05:32.689 "subsystem": "bdev", 00:05:32.689 "config": [ 00:05:32.689 { 00:05:32.689 "params": { 00:05:32.689 "trtype": "pcie", 00:05:32.689 "traddr": "0000:00:10.0", 00:05:32.689 "name": "Nvme0" 00:05:32.689 }, 00:05:32.689 "method": "bdev_nvme_attach_controller" 00:05:32.689 }, 00:05:32.689 { 00:05:32.689 "method": "bdev_wait_for_examine" 00:05:32.689 } 00:05:32.689 ] 00:05:32.689 } 00:05:32.689 ] 00:05:32.689 } 00:05:32.689 [2024-11-17 13:15:21.888293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.947 [2024-11-17 13:15:21.938427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.947 [2024-11-17 13:15:21.993687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.947  [2024-11-17T13:15:22.453Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:33.229 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:33.229 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.229 [2024-11-17 13:15:22.350906] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:33.229 [2024-11-17 13:15:22.350998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59937 ] 00:05:33.229 { 00:05:33.229 "subsystems": [ 00:05:33.229 { 00:05:33.229 "subsystem": "bdev", 00:05:33.229 "config": [ 00:05:33.229 { 00:05:33.229 "params": { 00:05:33.229 "trtype": "pcie", 00:05:33.229 "traddr": "0000:00:10.0", 00:05:33.229 "name": "Nvme0" 00:05:33.229 }, 00:05:33.229 "method": "bdev_nvme_attach_controller" 00:05:33.229 }, 00:05:33.229 { 00:05:33.229 "method": "bdev_wait_for_examine" 00:05:33.229 } 00:05:33.229 ] 00:05:33.229 } 00:05:33.229 ] 00:05:33.229 } 00:05:33.488 [2024-11-17 13:15:22.493143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.488 [2024-11-17 13:15:22.536385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.488 [2024-11-17 13:15:22.589248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.488  [2024-11-17T13:15:22.971Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:33.747 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:33.747 13:15:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.315 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:34.315 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:34.315 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:34.315 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.315 [2024-11-17 13:15:23.328299] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:34.315 [2024-11-17 13:15:23.328551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59955 ] 00:05:34.315 { 00:05:34.315 "subsystems": [ 00:05:34.315 { 00:05:34.315 "subsystem": "bdev", 00:05:34.315 "config": [ 00:05:34.315 { 00:05:34.315 "params": { 00:05:34.315 "trtype": "pcie", 00:05:34.315 "traddr": "0000:00:10.0", 00:05:34.315 "name": "Nvme0" 00:05:34.315 }, 00:05:34.315 "method": "bdev_nvme_attach_controller" 00:05:34.315 }, 00:05:34.315 { 00:05:34.315 "method": "bdev_wait_for_examine" 00:05:34.315 } 00:05:34.315 ] 00:05:34.315 } 00:05:34.315 ] 00:05:34.315 } 00:05:34.315 [2024-11-17 13:15:23.468926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.315 [2024-11-17 13:15:23.513458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.575 [2024-11-17 13:15:23.566011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.575  [2024-11-17T13:15:24.058Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:34.834 00:05:34.834 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:34.834 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:34.834 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:34.834 13:15:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.834 [2024-11-17 13:15:23.908447] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:34.834 [2024-11-17 13:15:23.908714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59964 ] 00:05:34.834 { 00:05:34.834 "subsystems": [ 00:05:34.834 { 00:05:34.834 "subsystem": "bdev", 00:05:34.834 "config": [ 00:05:34.834 { 00:05:34.834 "params": { 00:05:34.834 "trtype": "pcie", 00:05:34.834 "traddr": "0000:00:10.0", 00:05:34.834 "name": "Nvme0" 00:05:34.834 }, 00:05:34.834 "method": "bdev_nvme_attach_controller" 00:05:34.834 }, 00:05:34.834 { 00:05:34.834 "method": "bdev_wait_for_examine" 00:05:34.834 } 00:05:34.834 ] 00:05:34.834 } 00:05:34.834 ] 00:05:34.834 } 00:05:34.834 [2024-11-17 13:15:24.048738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.093 [2024-11-17 13:15:24.093534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.093 [2024-11-17 13:15:24.150041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.093  [2024-11-17T13:15:24.577Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:35.353 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:35.353 13:15:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.353 [2024-11-17 13:15:24.503389] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:35.353 [2024-11-17 13:15:24.503479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:05:35.353 { 00:05:35.353 "subsystems": [ 00:05:35.353 { 00:05:35.353 "subsystem": "bdev", 00:05:35.353 "config": [ 00:05:35.353 { 00:05:35.353 "params": { 00:05:35.353 "trtype": "pcie", 00:05:35.353 "traddr": "0000:00:10.0", 00:05:35.353 "name": "Nvme0" 00:05:35.353 }, 00:05:35.353 "method": "bdev_nvme_attach_controller" 00:05:35.353 }, 00:05:35.353 { 00:05:35.353 "method": "bdev_wait_for_examine" 00:05:35.353 } 00:05:35.353 ] 00:05:35.353 } 00:05:35.353 ] 00:05:35.353 } 00:05:35.612 [2024-11-17 13:15:24.644300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.613 [2024-11-17 13:15:24.686749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.613 [2024-11-17 13:15:24.738859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.871  [2024-11-17T13:15:25.095Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:35.871 00:05:35.871 00:05:35.871 real 0m14.010s 00:05:35.871 user 0m10.207s 00:05:35.871 sys 0m5.208s 00:05:35.871 ************************************ 00:05:35.871 END TEST dd_rw 00:05:35.871 ************************************ 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.871 ************************************ 00:05:35.871 START TEST dd_rw_offset 00:05:35.871 ************************************ 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:35.871 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:36.130 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:36.131 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=gdb4jobt7yxtznzwalhtb760tu1qsws321473sws4tpx904ocu2of30nzibe0gonur8e93d75sbg5mbs6340q6oax46dfaj0c59jc7y24rhd7cn6j1h1pv6kqhp95wpj9jhterljymhnh3h3553lf2kyweyvv7ea9tmbzb1lueay2ol9n74va6rjtyvdw8pwrigk2eeky4sus3nf8gj0eyqf7kczgewbhq8nwpr8ir5s670561cqvqjiv0h9gzmeb1c97n78egfztfe6l4lqf7j93kceysdaqp1fshmnuxrj7m1nmwy673q45twi86ug289by57gycu5etoc84mzf6b8xn0yq4a64wwmh7gy903uxaixs2b1d8gmjer5pcvazrh01syyp6zc2qxk9mrroscqwknjj2uif2ig225adojs0riz2l8adabpcf9rlzz45ho4cv0p9amo3b4z3sviy3mlqfdsbv0ov17h88v8ut6hvpbt3oemj0xodd3zaqqhqqq10ovplv167cqezpodkxtcepqmsk1ow7rwvnad0d4ad6r63ay2x6dyrxvm7x6n0k2xpneouzbebp8m0vdulhh9fjd8lnjq99mcvkk7m6974vmje3usp2hg98r45l5ejy8t94xcxuvuupfzc9fdr11f7mbls1bce5x3bzqmnmjftb83iyjti3sevmn4ytpmnl4si06hanqktl7d5xh8oic3qoo7w7lsxlhym8z2e0kvsyjy8oe13jjqafxvlnfer1awpayrdf11cezcks2m90tgnkdlzwe0ksch7xvo0k6g06juacdm8eufwy9005chbda4otfqusg9xl0ivdu1v7jmz0j7xrzimqemd3svhksa600u4yohnel10yqavvmw7eorsgern6xpp7pffglqotjnoxoualogjxz6r341fp3tsrcqbbrtrvu48yax1q4uyi2q5yrmv4dr2v1lvhj5bx1x42358xnex5wh9clxpzmj39rqraqrly20824y17ovd04wzjgl3lhn0dupi1uciqjdv6aesroh49qhznzuxbsrcvuvpt7xvsdaszbyl6w3g5cci3u7jxw3z2fgv0k4t1vzv0s9qfin0t2accbgaaavhccv0lcsaonhwp008hhvo2j25ok1r3filp44vimgmdwhc9vq9muyjz2plyz8a8aze1gdjhm4ba4mn9780kel8su25hf639h90a732b8ldodpuqtnikqz82x1kqse5zhe3kktpvkx3o76gfy8glpjl6bzxoi8nt10hoxu640fzjui9v2w57wnzdmv9t185ws2jhl9xhegzjcvlo4wwa40yk1nsk3g91q3dv2xbbhfbzkpqj7uqsd9318eu5k6ti57f2rjhlye1js7gy370su00wxh8l1n9pb7sszwgwwi56xntpox9kfe8zf1tfx6mwqx2mrrswgowg9hzrxgbxelpydfcp0qqqmuh1cuo022r72j85p2zx8qwj2zh4p0d7g31oxci8j9cckdfdvxjp1g4kvz813hns2z41wfkg4pc1dlteseummdqmwtu9jpvsnmezkje88dynuc8f88xrjad6jcnvioac9r7yevhwnffzrxmwxwoioizc7dx31q4hvsbidc8aqf9yeshh1h3l63wdzyolabud8txud1yd2ybjuy0ct1y1w8ujuswjnbw3r6za2dpgkqy2fmka04upqkjsww2jdxu9ey9xuntv3rxlmmm7yd9i20if77golbflor3rmn77kcw3re29hycu40bbkx8kzfucnkbajuue0okgmvgh32cbxjhanilpjtvdrrcnzhx3qbdv5acwwz7hswb7dua40fkd1sekxn0zpenznhg348gdv4waszefxjs7oh99q527lvtxrzkn2j33r1xn8utygkyj6e3auqeyk377vvf4ywcg8zt3yzps8b6fdy5br04193hwjavrig5cftdvzzpmk6o6wwhjkdmdpq9k2es6kulbumnhjlkpopnptq6l4w0mdtnyci5ev4mw2l8t762xthmcxhdfdyc9lqqdzssr3f2tv3g0lj982z7bdoxk5i22f9bl39x0ugeb86982tx8grmta7omgq3lujeu3iuaq226tows0z39yekh9c10e084l6vo7o4sv3pecui9mch0d1zfhkwu5jrsn8gdvxx860jusfyil73jwstyx42dl3niqquf6qlobmoktwfrez4kt5z6h12ye6k64rrhy8d977kg3xrijyqlyb71llamxurqnftcfdqrt378etc01wki7il0izknnp2ywxbso2r3akviv01tqaqnhtcg3f7mpdpsfmd1gbdclpbmfi7izkj0v67cxpwq27rhiqb7h3bhi8io30z2l2dndszup3eiongaker4fj2x8lq30f9z27313yu39iw930auvdscpwx9uialmvz1qpnzkc6sy6dmms8wv61dahi95oaf0jcwapc9t0dpqliwc6oc4lqd8187sili6t74pu9sxefexsymun3x4l187qr4lkga0mrn2v8a48co0778h1lyor02qpumlz5j5s5qwe53vwknf46zlehcionxiynx0digzks7p3aia5k7gvzyglz07j7o3qbdtqmas8qydl1ciwef18ekd5dz02771b2l09vwaxgene0da04cndwi9va4ud128ft72recmxhn9voarevzx77gwyxyfyfi73g8ulnre1zbs88c8hg8m3oa1kb8xk6f7yvlpxl9uqg2g6j271qii19cye6tvpdosof9sl2zn9y4xfzq0ti6ybjp4kryy34t9wcly2aurncoegnnrkkxjqcgfhcrx87pwwh9drf3acn8cgyc2promjdq7u2cca1pzgwi2ps63c7wo7674m58qmzlc3xt9lbdxis5you0u7bdgsas7dqufc6ap3t0fukfkhzywzppiquisy2w4iga0a60y002idpb931267gxwxxnlt04kd23u04nhnm7b2eqz0i7n39lgjgmzz83w1dpvdiei6vi4333opcbke2smuavpkw9gf7d4krnsnd4divzjwxdyp6aeye5t214cy938krv68hxjkx2hco4z99shrzp5axsekz64b2ucwf4qvkw6b8hk86vv6zbn1l1y2kjlh9ligd0g0cysyvtghcux91nf3e9h52xv8g3xr6w0z5w5vo0lh8da257tn56lzbeysryky2694smuxnym35pz35hc31rr3k9wp8zk5zlg9z31bet5yvpmzf3tswip40msp7ep1gl2dqk3ifgn22dy1ud06ha2i10qevz7u455oui7f4e6fjwmooe0723cvgm4tvfskl6ab94h2g62dwujuiprupnaxuywijocm7ronsan8had36lu2q47tnr3qhdanieu3skfrutjik4r8s7vm9r5dh2lbpc9ehce0dy3avogb9b2w0qugze3hdl675fjuffgjex9eevcznk5hkl7utdbri1lbp8s6q8ukpw9tgrz1ofedya
p18yoota1z6j75a27221i6vsfdxt44zub3dgf82ef4f9k3g7r0nevvyas2sv9ucbpn555d7g1gvhv7vdn6y4blkrr6jrs5ztajz8dv2fo9c2f5rjfjj9z741m9ejq5mfw3hte3pymezcm48546809zh5o5ta7m30zztqgw3u2726rhrgomwib5y60jknyh29wfapwit4vo6orotgtitb4n49tgakk12o1ec3h6p9y8my8ukgx3kauzr6n5lqilbtwj52jqyqxxo62xgncummfwzaus1md4rewcdh45q9x0bndavupmabgdnpc07ksio9z5g3p3uy8l9660jwv34rco9ayodg0xmf40mk5newse72j48or96ztb0dolpfct92opybx0asm3n6v2bh3d76u1asvc5zew8bi9ktaz4us4kmix3sylm2nlzmrrn92ii7v8dfhw1fsjzwekmagy0okiu5kj4pjnyy6jzbal99bzfu7rmbt1an4zzszeatpv208e0ydql4v33o8yk3b1kk04f1x3u4g77jy5 00:05:36.131 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:36.131 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:36.131 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:36.131 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:36.131 [2024-11-17 13:15:25.178928] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:36.131 [2024-11-17 13:15:25.179018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:05:36.131 { 00:05:36.131 "subsystems": [ 00:05:36.131 { 00:05:36.131 "subsystem": "bdev", 00:05:36.131 "config": [ 00:05:36.131 { 00:05:36.131 "params": { 00:05:36.131 "trtype": "pcie", 00:05:36.131 "traddr": "0000:00:10.0", 00:05:36.131 "name": "Nvme0" 00:05:36.131 }, 00:05:36.131 "method": "bdev_nvme_attach_controller" 00:05:36.131 }, 00:05:36.131 { 00:05:36.131 "method": "bdev_wait_for_examine" 00:05:36.131 } 00:05:36.131 ] 00:05:36.131 } 00:05:36.131 ] 00:05:36.131 } 00:05:36.131 [2024-11-17 13:15:25.322977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.390 [2024-11-17 13:15:25.371558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.390 [2024-11-17 13:15:25.424436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.390  [2024-11-17T13:15:25.872Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:36.648 00:05:36.648 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:36.648 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:36.649 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:36.649 13:15:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:36.649 { 00:05:36.649 "subsystems": [ 00:05:36.649 { 00:05:36.649 "subsystem": "bdev", 00:05:36.649 "config": [ 00:05:36.649 { 00:05:36.649 "params": { 00:05:36.649 "trtype": "pcie", 00:05:36.649 "traddr": "0000:00:10.0", 00:05:36.649 "name": "Nvme0" 00:05:36.649 }, 00:05:36.649 "method": "bdev_nvme_attach_controller" 00:05:36.649 }, 00:05:36.649 { 00:05:36.649 "method": "bdev_wait_for_examine" 00:05:36.649 } 00:05:36.649 ] 00:05:36.649 } 00:05:36.649 ] 00:05:36.649 } 00:05:36.649 [2024-11-17 13:15:25.773726] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:36.649 [2024-11-17 13:15:25.773860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:05:36.907 [2024-11-17 13:15:25.918389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.907 [2024-11-17 13:15:25.960662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.908 [2024-11-17 13:15:26.013064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.908  [2024-11-17T13:15:26.391Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:37.167 00:05:37.167 13:15:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ gdb4jobt7yxtznzwalhtb760tu1qsws321473sws4tpx904ocu2of30nzibe0gonur8e93d75sbg5mbs6340q6oax46dfaj0c59jc7y24rhd7cn6j1h1pv6kqhp95wpj9jhterljymhnh3h3553lf2kyweyvv7ea9tmbzb1lueay2ol9n74va6rjtyvdw8pwrigk2eeky4sus3nf8gj0eyqf7kczgewbhq8nwpr8ir5s670561cqvqjiv0h9gzmeb1c97n78egfztfe6l4lqf7j93kceysdaqp1fshmnuxrj7m1nmwy673q45twi86ug289by57gycu5etoc84mzf6b8xn0yq4a64wwmh7gy903uxaixs2b1d8gmjer5pcvazrh01syyp6zc2qxk9mrroscqwknjj2uif2ig225adojs0riz2l8adabpcf9rlzz45ho4cv0p9amo3b4z3sviy3mlqfdsbv0ov17h88v8ut6hvpbt3oemj0xodd3zaqqhqqq10ovplv167cqezpodkxtcepqmsk1ow7rwvnad0d4ad6r63ay2x6dyrxvm7x6n0k2xpneouzbebp8m0vdulhh9fjd8lnjq99mcvkk7m6974vmje3usp2hg98r45l5ejy8t94xcxuvuupfzc9fdr11f7mbls1bce5x3bzqmnmjftb83iyjti3sevmn4ytpmnl4si06hanqktl7d5xh8oic3qoo7w7lsxlhym8z2e0kvsyjy8oe13jjqafxvlnfer1awpayrdf11cezcks2m90tgnkdlzwe0ksch7xvo0k6g06juacdm8eufwy9005chbda4otfqusg9xl0ivdu1v7jmz0j7xrzimqemd3svhksa600u4yohnel10yqavvmw7eorsgern6xpp7pffglqotjnoxoualogjxz6r341fp3tsrcqbbrtrvu48yax1q4uyi2q5yrmv4dr2v1lvhj5bx1x42358xnex5wh9clxpzmj39rqraqrly20824y17ovd04wzjgl3lhn0dupi1uciqjdv6aesroh49qhznzuxbsrcvuvpt7xvsdaszbyl6w3g5cci3u7jxw3z2fgv0k4t1vzv0s9qfin0t2accbgaaavhccv0lcsaonhwp008hhvo2j25ok1r3filp44vimgmdwhc9vq9muyjz2plyz8a8aze1gdjhm4ba4mn9780kel8su25hf639h90a732b8ldodpuqtnikqz82x1kqse5zhe3kktpvkx3o76gfy8glpjl6bzxoi8nt10hoxu640fzjui9v2w57wnzdmv9t185ws2jhl9xhegzjcvlo4wwa40yk1nsk3g91q3dv2xbbhfbzkpqj7uqsd9318eu5k6ti57f2rjhlye1js7gy370su00wxh8l1n9pb7sszwgwwi56xntpox9kfe8zf1tfx6mwqx2mrrswgowg9hzrxgbxelpydfcp0qqqmuh1cuo022r72j85p2zx8qwj2zh4p0d7g31oxci8j9cckdfdvxjp1g4kvz813hns2z41wfkg4pc1dlteseummdqmwtu9jpvsnmezkje88dynuc8f88xrjad6jcnvioac9r7yevhwnffzrxmwxwoioizc7dx31q4hvsbidc8aqf9yeshh1h3l63wdzyolabud8txud1yd2ybjuy0ct1y1w8ujuswjnbw3r6za2dpgkqy2fmka04upqkjsww2jdxu9ey9xuntv3rxlmmm7yd9i20if77golbflor3rmn77kcw3re29hycu40bbkx8kzfucnkbajuue0okgmvgh32cbxjhanilpjtvdrrcnzhx3qbdv5acwwz7hswb7dua40fkd1sekxn0zpenznhg348gdv4waszefxjs7oh99q527lvtxrzkn2j33r1xn8utygkyj6e3auqeyk377vvf4ywcg8zt3yzps8b6fdy5br04193hwjavrig5cftdvzzpmk6o6wwhjkdmdpq9k2es6kulbumnhjlkpopnptq6l4w0mdtnyci5ev4mw2l8t762xthmcxhdfdyc9lqqdzssr3f2tv3g0lj982z7bdoxk5i22f9bl39x0ugeb86982tx8grmta7omgq3lujeu3iuaq226tows0z39yekh9c10e084l6vo7o4sv3pecui9mch0d1zfhkwu5jrsn8gdvxx860jusfyil73jwstyx42dl3niqquf6qlobmoktwfrez4kt5z6h12ye6k64rrhy8d977kg3xrijyqlyb71llamxurqnftcfdqrt378etc01wki7il0izknnp2ywxbso2r3akviv01tqaqnhtcg3f7mpdpsfmd1gbdclpbmfi7izkj0v67cxpwq27rhiqb7h3bhi8io30z2l2dndszup3eiongaker4fj2x8lq30f9z27313yu39iw930auvdscpwx9uialmvz1qpnzkc6sy6dmms8wv61dahi95oaf0jcwapc9t0dpqliwc6oc4lqd8187sili6t74pu9sxefexsymun3x4l187qr4lkga0mrn2v8a48co0778h1lyor02qpumlz5j5s5qwe53vwknf46zle
hcionxiynx0digzks7p3aia5k7gvzyglz07j7o3qbdtqmas8qydl1ciwef18ekd5dz02771b2l09vwaxgene0da04cndwi9va4ud128ft72recmxhn9voarevzx77gwyxyfyfi73g8ulnre1zbs88c8hg8m3oa1kb8xk6f7yvlpxl9uqg2g6j271qii19cye6tvpdosof9sl2zn9y4xfzq0ti6ybjp4kryy34t9wcly2aurncoegnnrkkxjqcgfhcrx87pwwh9drf3acn8cgyc2promjdq7u2cca1pzgwi2ps63c7wo7674m58qmzlc3xt9lbdxis5you0u7bdgsas7dqufc6ap3t0fukfkhzywzppiquisy2w4iga0a60y002idpb931267gxwxxnlt04kd23u04nhnm7b2eqz0i7n39lgjgmzz83w1dpvdiei6vi4333opcbke2smuavpkw9gf7d4krnsnd4divzjwxdyp6aeye5t214cy938krv68hxjkx2hco4z99shrzp5axsekz64b2ucwf4qvkw6b8hk86vv6zbn1l1y2kjlh9ligd0g0cysyvtghcux91nf3e9h52xv8g3xr6w0z5w5vo0lh8da257tn56lzbeysryky2694smuxnym35pz35hc31rr3k9wp8zk5zlg9z31bet5yvpmzf3tswip40msp7ep1gl2dqk3ifgn22dy1ud06ha2i10qevz7u455oui7f4e6fjwmooe0723cvgm4tvfskl6ab94h2g62dwujuiprupnaxuywijocm7ronsan8had36lu2q47tnr3qhdanieu3skfrutjik4r8s7vm9r5dh2lbpc9ehce0dy3avogb9b2w0qugze3hdl675fjuffgjex9eevcznk5hkl7utdbri1lbp8s6q8ukpw9tgrz1ofedyap18yoota1z6j75a27221i6vsfdxt44zub3dgf82ef4f9k3g7r0nevvyas2sv9ucbpn555d7g1gvhv7vdn6y4blkrr6jrs5ztajz8dv2fo9c2f5rjfjj9z741m9ejq5mfw3hte3pymezcm48546809zh5o5ta7m30zztqgw3u2726rhrgomwib5y60jknyh29wfapwit4vo6orotgtitb4n49tgakk12o1ec3h6p9y8my8ukgx3kauzr6n5lqilbtwj52jqyqxxo62xgncummfwzaus1md4rewcdh45q9x0bndavupmabgdnpc07ksio9z5g3p3uy8l9660jwv34rco9ayodg0xmf40mk5newse72j48or96ztb0dolpfct92opybx0asm3n6v2bh3d76u1asvc5zew8bi9ktaz4us4kmix3sylm2nlzmrrn92ii7v8dfhw1fsjzwekmagy0okiu5kj4pjnyy6jzbal99bzfu7rmbt1an4zzszeatpv208e0ydql4v33o8yk3b1kk04f1x3u4g77jy5 == \g\d\b\4\j\o\b\t\7\y\x\t\z\n\z\w\a\l\h\t\b\7\6\0\t\u\1\q\s\w\s\3\2\1\4\7\3\s\w\s\4\t\p\x\9\0\4\o\c\u\2\o\f\3\0\n\z\i\b\e\0\g\o\n\u\r\8\e\9\3\d\7\5\s\b\g\5\m\b\s\6\3\4\0\q\6\o\a\x\4\6\d\f\a\j\0\c\5\9\j\c\7\y\2\4\r\h\d\7\c\n\6\j\1\h\1\p\v\6\k\q\h\p\9\5\w\p\j\9\j\h\t\e\r\l\j\y\m\h\n\h\3\h\3\5\5\3\l\f\2\k\y\w\e\y\v\v\7\e\a\9\t\m\b\z\b\1\l\u\e\a\y\2\o\l\9\n\7\4\v\a\6\r\j\t\y\v\d\w\8\p\w\r\i\g\k\2\e\e\k\y\4\s\u\s\3\n\f\8\g\j\0\e\y\q\f\7\k\c\z\g\e\w\b\h\q\8\n\w\p\r\8\i\r\5\s\6\7\0\5\6\1\c\q\v\q\j\i\v\0\h\9\g\z\m\e\b\1\c\9\7\n\7\8\e\g\f\z\t\f\e\6\l\4\l\q\f\7\j\9\3\k\c\e\y\s\d\a\q\p\1\f\s\h\m\n\u\x\r\j\7\m\1\n\m\w\y\6\7\3\q\4\5\t\w\i\8\6\u\g\2\8\9\b\y\5\7\g\y\c\u\5\e\t\o\c\8\4\m\z\f\6\b\8\x\n\0\y\q\4\a\6\4\w\w\m\h\7\g\y\9\0\3\u\x\a\i\x\s\2\b\1\d\8\g\m\j\e\r\5\p\c\v\a\z\r\h\0\1\s\y\y\p\6\z\c\2\q\x\k\9\m\r\r\o\s\c\q\w\k\n\j\j\2\u\i\f\2\i\g\2\2\5\a\d\o\j\s\0\r\i\z\2\l\8\a\d\a\b\p\c\f\9\r\l\z\z\4\5\h\o\4\c\v\0\p\9\a\m\o\3\b\4\z\3\s\v\i\y\3\m\l\q\f\d\s\b\v\0\o\v\1\7\h\8\8\v\8\u\t\6\h\v\p\b\t\3\o\e\m\j\0\x\o\d\d\3\z\a\q\q\h\q\q\q\1\0\o\v\p\l\v\1\6\7\c\q\e\z\p\o\d\k\x\t\c\e\p\q\m\s\k\1\o\w\7\r\w\v\n\a\d\0\d\4\a\d\6\r\6\3\a\y\2\x\6\d\y\r\x\v\m\7\x\6\n\0\k\2\x\p\n\e\o\u\z\b\e\b\p\8\m\0\v\d\u\l\h\h\9\f\j\d\8\l\n\j\q\9\9\m\c\v\k\k\7\m\6\9\7\4\v\m\j\e\3\u\s\p\2\h\g\9\8\r\4\5\l\5\e\j\y\8\t\9\4\x\c\x\u\v\u\u\p\f\z\c\9\f\d\r\1\1\f\7\m\b\l\s\1\b\c\e\5\x\3\b\z\q\m\n\m\j\f\t\b\8\3\i\y\j\t\i\3\s\e\v\m\n\4\y\t\p\m\n\l\4\s\i\0\6\h\a\n\q\k\t\l\7\d\5\x\h\8\o\i\c\3\q\o\o\7\w\7\l\s\x\l\h\y\m\8\z\2\e\0\k\v\s\y\j\y\8\o\e\1\3\j\j\q\a\f\x\v\l\n\f\e\r\1\a\w\p\a\y\r\d\f\1\1\c\e\z\c\k\s\2\m\9\0\t\g\n\k\d\l\z\w\e\0\k\s\c\h\7\x\v\o\0\k\6\g\0\6\j\u\a\c\d\m\8\e\u\f\w\y\9\0\0\5\c\h\b\d\a\4\o\t\f\q\u\s\g\9\x\l\0\i\v\d\u\1\v\7\j\m\z\0\j\7\x\r\z\i\m\q\e\m\d\3\s\v\h\k\s\a\6\0\0\u\4\y\o\h\n\e\l\1\0\y\q\a\v\v\m\w\7\e\o\r\s\g\e\r\n\6\x\p\p\7\p\f\f\g\l\q\o\t\j\n\o\x\o\u\a\l\o\g\j\x\z\6\r\3\4\1\f\p\3\t\s\r\c\q\b\b\r\t\r\v\u\4\8\y\a\x\1\q\4\u\y\i\2\q\5\y\r\m\v\4\d\r\2\v\1\l\v\h\j\5\b\x\1\x\4\2\3\5\8\x\n\e\x\5\w\h\9\c\l\x\p\z\m\j\3\9\r\q\r\a\q\r\l\y\2\0\8\2\4\y\1\7\o\v\d\0\4\w\z\j\g\
l\3\l\h\n\0\d\u\p\i\1\u\c\i\q\j\d\v\6\a\e\s\r\o\h\4\9\q\h\z\n\z\u\x\b\s\r\c\v\u\v\p\t\7\x\v\s\d\a\s\z\b\y\l\6\w\3\g\5\c\c\i\3\u\7\j\x\w\3\z\2\f\g\v\0\k\4\t\1\v\z\v\0\s\9\q\f\i\n\0\t\2\a\c\c\b\g\a\a\a\v\h\c\c\v\0\l\c\s\a\o\n\h\w\p\0\0\8\h\h\v\o\2\j\2\5\o\k\1\r\3\f\i\l\p\4\4\v\i\m\g\m\d\w\h\c\9\v\q\9\m\u\y\j\z\2\p\l\y\z\8\a\8\a\z\e\1\g\d\j\h\m\4\b\a\4\m\n\9\7\8\0\k\e\l\8\s\u\2\5\h\f\6\3\9\h\9\0\a\7\3\2\b\8\l\d\o\d\p\u\q\t\n\i\k\q\z\8\2\x\1\k\q\s\e\5\z\h\e\3\k\k\t\p\v\k\x\3\o\7\6\g\f\y\8\g\l\p\j\l\6\b\z\x\o\i\8\n\t\1\0\h\o\x\u\6\4\0\f\z\j\u\i\9\v\2\w\5\7\w\n\z\d\m\v\9\t\1\8\5\w\s\2\j\h\l\9\x\h\e\g\z\j\c\v\l\o\4\w\w\a\4\0\y\k\1\n\s\k\3\g\9\1\q\3\d\v\2\x\b\b\h\f\b\z\k\p\q\j\7\u\q\s\d\9\3\1\8\e\u\5\k\6\t\i\5\7\f\2\r\j\h\l\y\e\1\j\s\7\g\y\3\7\0\s\u\0\0\w\x\h\8\l\1\n\9\p\b\7\s\s\z\w\g\w\w\i\5\6\x\n\t\p\o\x\9\k\f\e\8\z\f\1\t\f\x\6\m\w\q\x\2\m\r\r\s\w\g\o\w\g\9\h\z\r\x\g\b\x\e\l\p\y\d\f\c\p\0\q\q\q\m\u\h\1\c\u\o\0\2\2\r\7\2\j\8\5\p\2\z\x\8\q\w\j\2\z\h\4\p\0\d\7\g\3\1\o\x\c\i\8\j\9\c\c\k\d\f\d\v\x\j\p\1\g\4\k\v\z\8\1\3\h\n\s\2\z\4\1\w\f\k\g\4\p\c\1\d\l\t\e\s\e\u\m\m\d\q\m\w\t\u\9\j\p\v\s\n\m\e\z\k\j\e\8\8\d\y\n\u\c\8\f\8\8\x\r\j\a\d\6\j\c\n\v\i\o\a\c\9\r\7\y\e\v\h\w\n\f\f\z\r\x\m\w\x\w\o\i\o\i\z\c\7\d\x\3\1\q\4\h\v\s\b\i\d\c\8\a\q\f\9\y\e\s\h\h\1\h\3\l\6\3\w\d\z\y\o\l\a\b\u\d\8\t\x\u\d\1\y\d\2\y\b\j\u\y\0\c\t\1\y\1\w\8\u\j\u\s\w\j\n\b\w\3\r\6\z\a\2\d\p\g\k\q\y\2\f\m\k\a\0\4\u\p\q\k\j\s\w\w\2\j\d\x\u\9\e\y\9\x\u\n\t\v\3\r\x\l\m\m\m\7\y\d\9\i\2\0\i\f\7\7\g\o\l\b\f\l\o\r\3\r\m\n\7\7\k\c\w\3\r\e\2\9\h\y\c\u\4\0\b\b\k\x\8\k\z\f\u\c\n\k\b\a\j\u\u\e\0\o\k\g\m\v\g\h\3\2\c\b\x\j\h\a\n\i\l\p\j\t\v\d\r\r\c\n\z\h\x\3\q\b\d\v\5\a\c\w\w\z\7\h\s\w\b\7\d\u\a\4\0\f\k\d\1\s\e\k\x\n\0\z\p\e\n\z\n\h\g\3\4\8\g\d\v\4\w\a\s\z\e\f\x\j\s\7\o\h\9\9\q\5\2\7\l\v\t\x\r\z\k\n\2\j\3\3\r\1\x\n\8\u\t\y\g\k\y\j\6\e\3\a\u\q\e\y\k\3\7\7\v\v\f\4\y\w\c\g\8\z\t\3\y\z\p\s\8\b\6\f\d\y\5\b\r\0\4\1\9\3\h\w\j\a\v\r\i\g\5\c\f\t\d\v\z\z\p\m\k\6\o\6\w\w\h\j\k\d\m\d\p\q\9\k\2\e\s\6\k\u\l\b\u\m\n\h\j\l\k\p\o\p\n\p\t\q\6\l\4\w\0\m\d\t\n\y\c\i\5\e\v\4\m\w\2\l\8\t\7\6\2\x\t\h\m\c\x\h\d\f\d\y\c\9\l\q\q\d\z\s\s\r\3\f\2\t\v\3\g\0\l\j\9\8\2\z\7\b\d\o\x\k\5\i\2\2\f\9\b\l\3\9\x\0\u\g\e\b\8\6\9\8\2\t\x\8\g\r\m\t\a\7\o\m\g\q\3\l\u\j\e\u\3\i\u\a\q\2\2\6\t\o\w\s\0\z\3\9\y\e\k\h\9\c\1\0\e\0\8\4\l\6\v\o\7\o\4\s\v\3\p\e\c\u\i\9\m\c\h\0\d\1\z\f\h\k\w\u\5\j\r\s\n\8\g\d\v\x\x\8\6\0\j\u\s\f\y\i\l\7\3\j\w\s\t\y\x\4\2\d\l\3\n\i\q\q\u\f\6\q\l\o\b\m\o\k\t\w\f\r\e\z\4\k\t\5\z\6\h\1\2\y\e\6\k\6\4\r\r\h\y\8\d\9\7\7\k\g\3\x\r\i\j\y\q\l\y\b\7\1\l\l\a\m\x\u\r\q\n\f\t\c\f\d\q\r\t\3\7\8\e\t\c\0\1\w\k\i\7\i\l\0\i\z\k\n\n\p\2\y\w\x\b\s\o\2\r\3\a\k\v\i\v\0\1\t\q\a\q\n\h\t\c\g\3\f\7\m\p\d\p\s\f\m\d\1\g\b\d\c\l\p\b\m\f\i\7\i\z\k\j\0\v\6\7\c\x\p\w\q\2\7\r\h\i\q\b\7\h\3\b\h\i\8\i\o\3\0\z\2\l\2\d\n\d\s\z\u\p\3\e\i\o\n\g\a\k\e\r\4\f\j\2\x\8\l\q\3\0\f\9\z\2\7\3\1\3\y\u\3\9\i\w\9\3\0\a\u\v\d\s\c\p\w\x\9\u\i\a\l\m\v\z\1\q\p\n\z\k\c\6\s\y\6\d\m\m\s\8\w\v\6\1\d\a\h\i\9\5\o\a\f\0\j\c\w\a\p\c\9\t\0\d\p\q\l\i\w\c\6\o\c\4\l\q\d\8\1\8\7\s\i\l\i\6\t\7\4\p\u\9\s\x\e\f\e\x\s\y\m\u\n\3\x\4\l\1\8\7\q\r\4\l\k\g\a\0\m\r\n\2\v\8\a\4\8\c\o\0\7\7\8\h\1\l\y\o\r\0\2\q\p\u\m\l\z\5\j\5\s\5\q\w\e\5\3\v\w\k\n\f\4\6\z\l\e\h\c\i\o\n\x\i\y\n\x\0\d\i\g\z\k\s\7\p\3\a\i\a\5\k\7\g\v\z\y\g\l\z\0\7\j\7\o\3\q\b\d\t\q\m\a\s\8\q\y\d\l\1\c\i\w\e\f\1\8\e\k\d\5\d\z\0\2\7\7\1\b\2\l\0\9\v\w\a\x\g\e\n\e\0\d\a\0\4\c\n\d\w\i\9\v\a\4\u\d\1\2\8\f\t\7\2\r\e\c\m\x\h\n\9\v\o\a\r\e\v\z\x\7\7\g\w\y\x\y\f\y\f\i\7\3\g\8\u\l\n\r\e\1\z\b\s\8\8\c\8\h\g\8\m\3\o\a\1\k\b\8\x\k\6\f\7\y\v\l\p\x\l\9\u\q\g\2\g\6\j\2\7\1\q\i\i\1\9\c\y\e\6\t\v\p\d\o\s\o\f
\9\s\l\2\z\n\9\y\4\x\f\z\q\0\t\i\6\y\b\j\p\4\k\r\y\y\3\4\t\9\w\c\l\y\2\a\u\r\n\c\o\e\g\n\n\r\k\k\x\j\q\c\g\f\h\c\r\x\8\7\p\w\w\h\9\d\r\f\3\a\c\n\8\c\g\y\c\2\p\r\o\m\j\d\q\7\u\2\c\c\a\1\p\z\g\w\i\2\p\s\6\3\c\7\w\o\7\6\7\4\m\5\8\q\m\z\l\c\3\x\t\9\l\b\d\x\i\s\5\y\o\u\0\u\7\b\d\g\s\a\s\7\d\q\u\f\c\6\a\p\3\t\0\f\u\k\f\k\h\z\y\w\z\p\p\i\q\u\i\s\y\2\w\4\i\g\a\0\a\6\0\y\0\0\2\i\d\p\b\9\3\1\2\6\7\g\x\w\x\x\n\l\t\0\4\k\d\2\3\u\0\4\n\h\n\m\7\b\2\e\q\z\0\i\7\n\3\9\l\g\j\g\m\z\z\8\3\w\1\d\p\v\d\i\e\i\6\v\i\4\3\3\3\o\p\c\b\k\e\2\s\m\u\a\v\p\k\w\9\g\f\7\d\4\k\r\n\s\n\d\4\d\i\v\z\j\w\x\d\y\p\6\a\e\y\e\5\t\2\1\4\c\y\9\3\8\k\r\v\6\8\h\x\j\k\x\2\h\c\o\4\z\9\9\s\h\r\z\p\5\a\x\s\e\k\z\6\4\b\2\u\c\w\f\4\q\v\k\w\6\b\8\h\k\8\6\v\v\6\z\b\n\1\l\1\y\2\k\j\l\h\9\l\i\g\d\0\g\0\c\y\s\y\v\t\g\h\c\u\x\9\1\n\f\3\e\9\h\5\2\x\v\8\g\3\x\r\6\w\0\z\5\w\5\v\o\0\l\h\8\d\a\2\5\7\t\n\5\6\l\z\b\e\y\s\r\y\k\y\2\6\9\4\s\m\u\x\n\y\m\3\5\p\z\3\5\h\c\3\1\r\r\3\k\9\w\p\8\z\k\5\z\l\g\9\z\3\1\b\e\t\5\y\v\p\m\z\f\3\t\s\w\i\p\4\0\m\s\p\7\e\p\1\g\l\2\d\q\k\3\i\f\g\n\2\2\d\y\1\u\d\0\6\h\a\2\i\1\0\q\e\v\z\7\u\4\5\5\o\u\i\7\f\4\e\6\f\j\w\m\o\o\e\0\7\2\3\c\v\g\m\4\t\v\f\s\k\l\6\a\b\9\4\h\2\g\6\2\d\w\u\j\u\i\p\r\u\p\n\a\x\u\y\w\i\j\o\c\m\7\r\o\n\s\a\n\8\h\a\d\3\6\l\u\2\q\4\7\t\n\r\3\q\h\d\a\n\i\e\u\3\s\k\f\r\u\t\j\i\k\4\r\8\s\7\v\m\9\r\5\d\h\2\l\b\p\c\9\e\h\c\e\0\d\y\3\a\v\o\g\b\9\b\2\w\0\q\u\g\z\e\3\h\d\l\6\7\5\f\j\u\f\f\g\j\e\x\9\e\e\v\c\z\n\k\5\h\k\l\7\u\t\d\b\r\i\1\l\b\p\8\s\6\q\8\u\k\p\w\9\t\g\r\z\1\o\f\e\d\y\a\p\1\8\y\o\o\t\a\1\z\6\j\7\5\a\2\7\2\2\1\i\6\v\s\f\d\x\t\4\4\z\u\b\3\d\g\f\8\2\e\f\4\f\9\k\3\g\7\r\0\n\e\v\v\y\a\s\2\s\v\9\u\c\b\p\n\5\5\5\d\7\g\1\g\v\h\v\7\v\d\n\6\y\4\b\l\k\r\r\6\j\r\s\5\z\t\a\j\z\8\d\v\2\f\o\9\c\2\f\5\r\j\f\j\j\9\z\7\4\1\m\9\e\j\q\5\m\f\w\3\h\t\e\3\p\y\m\e\z\c\m\4\8\5\4\6\8\0\9\z\h\5\o\5\t\a\7\m\3\0\z\z\t\q\g\w\3\u\2\7\2\6\r\h\r\g\o\m\w\i\b\5\y\6\0\j\k\n\y\h\2\9\w\f\a\p\w\i\t\4\v\o\6\o\r\o\t\g\t\i\t\b\4\n\4\9\t\g\a\k\k\1\2\o\1\e\c\3\h\6\p\9\y\8\m\y\8\u\k\g\x\3\k\a\u\z\r\6\n\5\l\q\i\l\b\t\w\j\5\2\j\q\y\q\x\x\o\6\2\x\g\n\c\u\m\m\f\w\z\a\u\s\1\m\d\4\r\e\w\c\d\h\4\5\q\9\x\0\b\n\d\a\v\u\p\m\a\b\g\d\n\p\c\0\7\k\s\i\o\9\z\5\g\3\p\3\u\y\8\l\9\6\6\0\j\w\v\3\4\r\c\o\9\a\y\o\d\g\0\x\m\f\4\0\m\k\5\n\e\w\s\e\7\2\j\4\8\o\r\9\6\z\t\b\0\d\o\l\p\f\c\t\9\2\o\p\y\b\x\0\a\s\m\3\n\6\v\2\b\h\3\d\7\6\u\1\a\s\v\c\5\z\e\w\8\b\i\9\k\t\a\z\4\u\s\4\k\m\i\x\3\s\y\l\m\2\n\l\z\m\r\r\n\9\2\i\i\7\v\8\d\f\h\w\1\f\s\j\z\w\e\k\m\a\g\y\0\o\k\i\u\5\k\j\4\p\j\n\y\y\6\j\z\b\a\l\9\9\b\z\f\u\7\r\m\b\t\1\a\n\4\z\z\s\z\e\a\t\p\v\2\0\8\e\0\y\d\q\l\4\v\3\3\o\8\y\k\3\b\1\k\k\0\4\f\1\x\3\u\4\g\7\7\j\y\5 ]] 00:05:37.168 00:05:37.168 real 0m1.232s 00:05:37.168 user 0m0.825s 00:05:37.168 sys 0m0.560s 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.168 ************************************ 00:05:37.168 END TEST dd_rw_offset 00:05:37.168 ************************************ 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.168 13:15:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.427 { 00:05:37.427 "subsystems": [ 00:05:37.427 { 00:05:37.427 "subsystem": "bdev", 00:05:37.427 "config": [ 00:05:37.427 { 00:05:37.427 "params": { 00:05:37.427 "trtype": "pcie", 00:05:37.427 "traddr": "0000:00:10.0", 00:05:37.427 "name": "Nvme0" 00:05:37.427 }, 00:05:37.427 "method": "bdev_nvme_attach_controller" 00:05:37.427 }, 00:05:37.427 { 00:05:37.427 "method": "bdev_wait_for_examine" 00:05:37.427 } 00:05:37.427 ] 00:05:37.427 } 00:05:37.427 ] 00:05:37.427 } 00:05:37.427 [2024-11-17 13:15:26.413004] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:37.427 [2024-11-17 13:15:26.413100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60064 ] 00:05:37.427 [2024-11-17 13:15:26.557653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.427 [2024-11-17 13:15:26.606411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.687 [2024-11-17 13:15:26.659400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.687  [2024-11-17T13:15:27.169Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:37.946 00:05:37.946 13:15:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:37.946 ************************************ 00:05:37.946 END TEST spdk_dd_basic_rw 00:05:37.946 ************************************ 00:05:37.946 00:05:37.946 real 0m16.916s 00:05:37.946 user 0m12.043s 00:05:37.946 sys 0m6.355s 00:05:37.946 13:15:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.946 13:15:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.946 13:15:26 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:37.946 13:15:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.946 13:15:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.946 13:15:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:37.946 ************************************ 00:05:37.946 START TEST spdk_dd_posix 00:05:37.946 ************************************ 00:05:37.946 13:15:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:37.946 * Looking for test storage... 
00:05:37.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:37.946 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.946 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.946 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.205 --rc genhtml_branch_coverage=1 00:05:38.205 --rc genhtml_function_coverage=1 00:05:38.205 --rc genhtml_legend=1 00:05:38.205 --rc geninfo_all_blocks=1 00:05:38.205 --rc geninfo_unexecuted_blocks=1 00:05:38.205 00:05:38.205 ' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.205 --rc genhtml_branch_coverage=1 00:05:38.205 --rc genhtml_function_coverage=1 00:05:38.205 --rc genhtml_legend=1 00:05:38.205 --rc geninfo_all_blocks=1 00:05:38.205 --rc geninfo_unexecuted_blocks=1 00:05:38.205 00:05:38.205 ' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.205 --rc genhtml_branch_coverage=1 00:05:38.205 --rc genhtml_function_coverage=1 00:05:38.205 --rc genhtml_legend=1 00:05:38.205 --rc geninfo_all_blocks=1 00:05:38.205 --rc geninfo_unexecuted_blocks=1 00:05:38.205 00:05:38.205 ' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.205 --rc genhtml_branch_coverage=1 00:05:38.205 --rc genhtml_function_coverage=1 00:05:38.205 --rc genhtml_legend=1 00:05:38.205 --rc geninfo_all_blocks=1 00:05:38.205 --rc geninfo_unexecuted_blocks=1 00:05:38.205 00:05:38.205 ' 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.205 13:15:27 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:38.206 * First test run, liburing in use 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:38.206 ************************************ 00:05:38.206 START TEST dd_flag_append 00:05:38.206 ************************************ 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=pg4vgfvq2xw6inua9cos0z6bvfcd5x6x 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=hfq44i1gcmpkewtby3hhen43vgn0fvp3 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s pg4vgfvq2xw6inua9cos0z6bvfcd5x6x 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s hfq44i1gcmpkewtby3hhen43vgn0fvp3 00:05:38.206 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:38.206 [2024-11-17 13:15:27.277065] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:38.206 [2024-11-17 13:15:27.277151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:05:38.206 [2024-11-17 13:15:27.418930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.464 [2024-11-17 13:15:27.463519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.464 [2024-11-17 13:15:27.516161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.464  [2024-11-17T13:15:27.946Z] Copying: 32/32 [B] (average 31 kBps) 00:05:38.722 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ hfq44i1gcmpkewtby3hhen43vgn0fvp3pg4vgfvq2xw6inua9cos0z6bvfcd5x6x == \h\f\q\4\4\i\1\g\c\m\p\k\e\w\t\b\y\3\h\h\e\n\4\3\v\g\n\0\f\v\p\3\p\g\4\v\g\f\v\q\2\x\w\6\i\n\u\a\9\c\o\s\0\z\6\b\v\f\c\d\5\x\6\x ]] 00:05:38.722 00:05:38.722 real 0m0.520s 00:05:38.722 user 0m0.266s 00:05:38.722 sys 0m0.271s 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.722 ************************************ 00:05:38.722 END TEST dd_flag_append 00:05:38.722 ************************************ 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:38.722 ************************************ 00:05:38.722 START TEST dd_flag_directory 00:05:38.722 ************************************ 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:38.722 13:15:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:38.722 [2024-11-17 13:15:27.834494] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:38.722 [2024-11-17 13:15:27.834555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60159 ] 00:05:38.981 [2024-11-17 13:15:27.976237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.981 [2024-11-17 13:15:28.019096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.981 [2024-11-17 13:15:28.070863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.981 [2024-11-17 13:15:28.103272] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:38.981 [2024-11-17 13:15:28.103323] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:38.981 [2024-11-17 13:15:28.103356] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.240 [2024-11-17 13:15:28.215093] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:39.240 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.241 13:15:28 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:39.241 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:39.241 [2024-11-17 13:15:28.335457] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:39.241 [2024-11-17 13:15:28.335683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:05:39.500 [2024-11-17 13:15:28.473598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.500 [2024-11-17 13:15:28.516591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.500 [2024-11-17 13:15:28.568559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.500 [2024-11-17 13:15:28.600872] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:39.500 [2024-11-17 13:15:28.600929] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:39.500 [2024-11-17 13:15:28.600963] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.500 [2024-11-17 13:15:28.710977] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:39.759 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:39.759 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.759 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:39.759 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:39.759 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.760 00:05:39.760 real 0m0.996s 00:05:39.760 user 0m0.531s 00:05:39.760 sys 0m0.257s 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:39.760 ************************************ 00:05:39.760 END TEST dd_flag_directory 00:05:39.760 ************************************ 00:05:39.760 13:15:28 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:39.760 ************************************ 00:05:39.760 START TEST dd_flag_nofollow 00:05:39.760 ************************************ 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:39.760 13:15:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:39.760 [2024-11-17 13:15:28.894316] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:39.760 [2024-11-17 13:15:28.894420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60197 ] 00:05:40.018 [2024-11-17 13:15:29.036432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.018 [2024-11-17 13:15:29.078846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.018 [2024-11-17 13:15:29.131089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.018 [2024-11-17 13:15:29.163981] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:40.018 [2024-11-17 13:15:29.164056] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:40.019 [2024-11-17 13:15:29.164091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.277 [2024-11-17 13:15:29.275341] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:40.277 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.278 13:15:29 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:40.278 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:40.278 [2024-11-17 13:15:29.421745] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:40.278 [2024-11-17 13:15:29.421862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:05:40.537 [2024-11-17 13:15:29.571182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.537 [2024-11-17 13:15:29.614084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.537 [2024-11-17 13:15:29.665577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.537 [2024-11-17 13:15:29.698017] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:40.537 [2024-11-17 13:15:29.698073] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:40.537 [2024-11-17 13:15:29.698108] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.796 [2024-11-17 13:15:29.807931] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:40.796 13:15:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:40.796 [2024-11-17 13:15:29.940861] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:40.796 [2024-11-17 13:15:29.940955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:05:41.055 [2024-11-17 13:15:30.083939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.056 [2024-11-17 13:15:30.126976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.056 [2024-11-17 13:15:30.178768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.056  [2024-11-17T13:15:30.538Z] Copying: 512/512 [B] (average 500 kBps) 00:05:41.314 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ y3dps1q6p6m2q5gqfnjs2ly6sadvnjuehz1uujtn8l8g4f8tn9g2ybb9y4cobxjqofrzvum08n7gq11lud0lhhvlhl2l9usjiy4cml0ku5rswz1e1e3lwy6bmqycze7zpkjj0wo4b5zq28jrw4ruf3d3kraub4l3kdh9opl02qenuxsff5fzgd1dw74n50oxow7d3o81p9bk1uftemyi0h4us9abquyky0q3g9yb7rwsuwmhkjm2o2cskloqwsvb6qk004q6q4utoee5zuqxocnyvsxq922z2mrlmbltxhia9s44abyt3g19csxu3nfxafj2nxq02rrmfwdjsgc51sx91t6alq3vush2uy5ihmqzokt4tw8ot7gtdyzj5hjkf0oyulme116s0z520u8h5fvjnkqqh8pfrscvmfg9iubhb1ahb8u2t4jxpktxrfn8ubznks8yemjpjjr1oa0pc4ad4dzrupswvpkfcjnqzth6b42019pgwz27gu2sce5p == \y\3\d\p\s\1\q\6\p\6\m\2\q\5\g\q\f\n\j\s\2\l\y\6\s\a\d\v\n\j\u\e\h\z\1\u\u\j\t\n\8\l\8\g\4\f\8\t\n\9\g\2\y\b\b\9\y\4\c\o\b\x\j\q\o\f\r\z\v\u\m\0\8\n\7\g\q\1\1\l\u\d\0\l\h\h\v\l\h\l\2\l\9\u\s\j\i\y\4\c\m\l\0\k\u\5\r\s\w\z\1\e\1\e\3\l\w\y\6\b\m\q\y\c\z\e\7\z\p\k\j\j\0\w\o\4\b\5\z\q\2\8\j\r\w\4\r\u\f\3\d\3\k\r\a\u\b\4\l\3\k\d\h\9\o\p\l\0\2\q\e\n\u\x\s\f\f\5\f\z\g\d\1\d\w\7\4\n\5\0\o\x\o\w\7\d\3\o\8\1\p\9\b\k\1\u\f\t\e\m\y\i\0\h\4\u\s\9\a\b\q\u\y\k\y\0\q\3\g\9\y\b\7\r\w\s\u\w\m\h\k\j\m\2\o\2\c\s\k\l\o\q\w\s\v\b\6\q\k\0\0\4\q\6\q\4\u\t\o\e\e\5\z\u\q\x\o\c\n\y\v\s\x\q\9\2\2\z\2\m\r\l\m\b\l\t\x\h\i\a\9\s\4\4\a\b\y\t\3\g\1\9\c\s\x\u\3\n\f\x\a\f\j\2\n\x\q\0\2\r\r\m\f\w\d\j\s\g\c\5\1\s\x\9\1\t\6\a\l\q\3\v\u\s\h\2\u\y\5\i\h\m\q\z\o\k\t\4\t\w\8\o\t\7\g\t\d\y\z\j\5\h\j\k\f\0\o\y\u\l\m\e\1\1\6\s\0\z\5\2\0\u\8\h\5\f\v\j\n\k\q\q\h\8\p\f\r\s\c\v\m\f\g\9\i\u\b\h\b\1\a\h\b\8\u\2\t\4\j\x\p\k\t\x\r\f\n\8\u\b\z\n\k\s\8\y\e\m\j\p\j\j\r\1\o\a\0\p\c\4\a\d\4\d\z\r\u\p\s\w\v\p\k\f\c\j\n\q\z\t\h\6\b\4\2\0\1\9\p\g\w\z\2\7\g\u\2\s\c\e\5\p ]] 00:05:41.315 00:05:41.315 real 0m1.558s 00:05:41.315 user 0m0.824s 00:05:41.315 sys 0m0.539s 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.315 ************************************ 00:05:41.315 END TEST dd_flag_nofollow 00:05:41.315 ************************************ 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 ************************************ 00:05:41.315 START TEST dd_flag_noatime 00:05:41.315 ************************************ 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731849330 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731849330 00:05:41.315 13:15:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:42.252 13:15:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.511 [2024-11-17 13:15:31.528116] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:42.511 [2024-11-17 13:15:31.528231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60261 ] 00:05:42.511 [2024-11-17 13:15:31.679890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.769 [2024-11-17 13:15:31.733936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.769 [2024-11-17 13:15:31.791550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.769  [2024-11-17T13:15:32.252Z] Copying: 512/512 [B] (average 500 kBps) 00:05:43.028 00:05:43.028 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:43.028 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731849330 )) 00:05:43.028 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.028 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731849330 )) 00:05:43.028 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.028 [2024-11-17 13:15:32.061114] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:43.028 [2024-11-17 13:15:32.061223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60270 ] 00:05:43.028 [2024-11-17 13:15:32.197261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.028 [2024-11-17 13:15:32.240751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.287 [2024-11-17 13:15:32.294820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.287  [2024-11-17T13:15:32.770Z] Copying: 512/512 [B] (average 500 kBps) 00:05:43.546 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731849332 )) 00:05:43.546 00:05:43.546 real 0m2.076s 00:05:43.546 user 0m0.577s 00:05:43.546 sys 0m0.548s 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:43.546 ************************************ 00:05:43.546 END TEST dd_flag_noatime 00:05:43.546 ************************************ 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:43.546 ************************************ 00:05:43.546 START TEST dd_flags_misc 00:05:43.546 ************************************ 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:43.546 13:15:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:43.546 [2024-11-17 13:15:32.627817] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:43.546 [2024-11-17 13:15:32.627947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ] 00:05:43.546 [2024-11-17 13:15:32.757997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.805 [2024-11-17 13:15:32.801048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.805 [2024-11-17 13:15:32.853035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.805  [2024-11-17T13:15:33.288Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.064 00:05:44.065 13:15:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 77i1izseyk209kanu7lsbvsa689s7tlquvw5ojkcumho418u8wiuipvuzbvpzgoas60zkxmjybidkfhxfmuprts6h6vder72ykbact0u0ghutn4i7k08f2r41brfq0dzoqevewolc71quxkeylop5f2ayz7rfyagqy7q4dr6305upqglqiyqi8mjcb0llzbtg5kbehutbd6f45tnbu2rorc9ly65646g7sj53orfdlrvauvfxtbagkka4b1kv91rwvlwn8syceaslmq54hcq5wjac99ptxvlszj7p5x78kvd66o66a753bmo6alf2uoxjhkvh25gqjicue6ue5w5q4d1cgq6yva77jiz2sh17ehy8qvyfgjfg530re1r26omsnn24qg5so0ppk6ruz6n7cdvv97tudnmfuaur0hs7j5beea3ccr9edi0zmjl7jb10lpl8rje1f98y3gcew72l4djxsrdd6ffhd1c92vhwaax9hsp1sd974jxfrdwlugc == \7\7\i\1\i\z\s\e\y\k\2\0\9\k\a\n\u\7\l\s\b\v\s\a\6\8\9\s\7\t\l\q\u\v\w\5\o\j\k\c\u\m\h\o\4\1\8\u\8\w\i\u\i\p\v\u\z\b\v\p\z\g\o\a\s\6\0\z\k\x\m\j\y\b\i\d\k\f\h\x\f\m\u\p\r\t\s\6\h\6\v\d\e\r\7\2\y\k\b\a\c\t\0\u\0\g\h\u\t\n\4\i\7\k\0\8\f\2\r\4\1\b\r\f\q\0\d\z\o\q\e\v\e\w\o\l\c\7\1\q\u\x\k\e\y\l\o\p\5\f\2\a\y\z\7\r\f\y\a\g\q\y\7\q\4\d\r\6\3\0\5\u\p\q\g\l\q\i\y\q\i\8\m\j\c\b\0\l\l\z\b\t\g\5\k\b\e\h\u\t\b\d\6\f\4\5\t\n\b\u\2\r\o\r\c\9\l\y\6\5\6\4\6\g\7\s\j\5\3\o\r\f\d\l\r\v\a\u\v\f\x\t\b\a\g\k\k\a\4\b\1\k\v\9\1\r\w\v\l\w\n\8\s\y\c\e\a\s\l\m\q\5\4\h\c\q\5\w\j\a\c\9\9\p\t\x\v\l\s\z\j\7\p\5\x\7\8\k\v\d\6\6\o\6\6\a\7\5\3\b\m\o\6\a\l\f\2\u\o\x\j\h\k\v\h\2\5\g\q\j\i\c\u\e\6\u\e\5\w\5\q\4\d\1\c\g\q\6\y\v\a\7\7\j\i\z\2\s\h\1\7\e\h\y\8\q\v\y\f\g\j\f\g\5\3\0\r\e\1\r\2\6\o\m\s\n\n\2\4\q\g\5\s\o\0\p\p\k\6\r\u\z\6\n\7\c\d\v\v\9\7\t\u\d\n\m\f\u\a\u\r\0\h\s\7\j\5\b\e\e\a\3\c\c\r\9\e\d\i\0\z\m\j\l\7\j\b\1\0\l\p\l\8\r\j\e\1\f\9\8\y\3\g\c\e\w\7\2\l\4\d\j\x\s\r\d\d\6\f\f\h\d\1\c\9\2\v\h\w\a\a\x\9\h\s\p\1\s\d\9\7\4\j\x\f\r\d\w\l\u\g\c ]] 00:05:44.065 13:15:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:44.065 13:15:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:44.065 [2024-11-17 13:15:33.124697] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:44.065 [2024-11-17 13:15:33.124833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60308 ] 00:05:44.065 [2024-11-17 13:15:33.268060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.328 [2024-11-17 13:15:33.317509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.328 [2024-11-17 13:15:33.371100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.328  [2024-11-17T13:15:33.825Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.601 00:05:44.601 13:15:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 77i1izseyk209kanu7lsbvsa689s7tlquvw5ojkcumho418u8wiuipvuzbvpzgoas60zkxmjybidkfhxfmuprts6h6vder72ykbact0u0ghutn4i7k08f2r41brfq0dzoqevewolc71quxkeylop5f2ayz7rfyagqy7q4dr6305upqglqiyqi8mjcb0llzbtg5kbehutbd6f45tnbu2rorc9ly65646g7sj53orfdlrvauvfxtbagkka4b1kv91rwvlwn8syceaslmq54hcq5wjac99ptxvlszj7p5x78kvd66o66a753bmo6alf2uoxjhkvh25gqjicue6ue5w5q4d1cgq6yva77jiz2sh17ehy8qvyfgjfg530re1r26omsnn24qg5so0ppk6ruz6n7cdvv97tudnmfuaur0hs7j5beea3ccr9edi0zmjl7jb10lpl8rje1f98y3gcew72l4djxsrdd6ffhd1c92vhwaax9hsp1sd974jxfrdwlugc == \7\7\i\1\i\z\s\e\y\k\2\0\9\k\a\n\u\7\l\s\b\v\s\a\6\8\9\s\7\t\l\q\u\v\w\5\o\j\k\c\u\m\h\o\4\1\8\u\8\w\i\u\i\p\v\u\z\b\v\p\z\g\o\a\s\6\0\z\k\x\m\j\y\b\i\d\k\f\h\x\f\m\u\p\r\t\s\6\h\6\v\d\e\r\7\2\y\k\b\a\c\t\0\u\0\g\h\u\t\n\4\i\7\k\0\8\f\2\r\4\1\b\r\f\q\0\d\z\o\q\e\v\e\w\o\l\c\7\1\q\u\x\k\e\y\l\o\p\5\f\2\a\y\z\7\r\f\y\a\g\q\y\7\q\4\d\r\6\3\0\5\u\p\q\g\l\q\i\y\q\i\8\m\j\c\b\0\l\l\z\b\t\g\5\k\b\e\h\u\t\b\d\6\f\4\5\t\n\b\u\2\r\o\r\c\9\l\y\6\5\6\4\6\g\7\s\j\5\3\o\r\f\d\l\r\v\a\u\v\f\x\t\b\a\g\k\k\a\4\b\1\k\v\9\1\r\w\v\l\w\n\8\s\y\c\e\a\s\l\m\q\5\4\h\c\q\5\w\j\a\c\9\9\p\t\x\v\l\s\z\j\7\p\5\x\7\8\k\v\d\6\6\o\6\6\a\7\5\3\b\m\o\6\a\l\f\2\u\o\x\j\h\k\v\h\2\5\g\q\j\i\c\u\e\6\u\e\5\w\5\q\4\d\1\c\g\q\6\y\v\a\7\7\j\i\z\2\s\h\1\7\e\h\y\8\q\v\y\f\g\j\f\g\5\3\0\r\e\1\r\2\6\o\m\s\n\n\2\4\q\g\5\s\o\0\p\p\k\6\r\u\z\6\n\7\c\d\v\v\9\7\t\u\d\n\m\f\u\a\u\r\0\h\s\7\j\5\b\e\e\a\3\c\c\r\9\e\d\i\0\z\m\j\l\7\j\b\1\0\l\p\l\8\r\j\e\1\f\9\8\y\3\g\c\e\w\7\2\l\4\d\j\x\s\r\d\d\6\f\f\h\d\1\c\9\2\v\h\w\a\a\x\9\h\s\p\1\s\d\9\7\4\j\x\f\r\d\w\l\u\g\c ]] 00:05:44.601 13:15:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:44.601 13:15:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:44.601 [2024-11-17 13:15:33.657113] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:44.601 [2024-11-17 13:15:33.657226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60323 ] 00:05:44.601 [2024-11-17 13:15:33.807964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.860 [2024-11-17 13:15:33.862573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.860 [2024-11-17 13:15:33.918194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.860  [2024-11-17T13:15:34.342Z] Copying: 512/512 [B] (average 125 kBps) 00:05:45.118 00:05:45.118 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 77i1izseyk209kanu7lsbvsa689s7tlquvw5ojkcumho418u8wiuipvuzbvpzgoas60zkxmjybidkfhxfmuprts6h6vder72ykbact0u0ghutn4i7k08f2r41brfq0dzoqevewolc71quxkeylop5f2ayz7rfyagqy7q4dr6305upqglqiyqi8mjcb0llzbtg5kbehutbd6f45tnbu2rorc9ly65646g7sj53orfdlrvauvfxtbagkka4b1kv91rwvlwn8syceaslmq54hcq5wjac99ptxvlszj7p5x78kvd66o66a753bmo6alf2uoxjhkvh25gqjicue6ue5w5q4d1cgq6yva77jiz2sh17ehy8qvyfgjfg530re1r26omsnn24qg5so0ppk6ruz6n7cdvv97tudnmfuaur0hs7j5beea3ccr9edi0zmjl7jb10lpl8rje1f98y3gcew72l4djxsrdd6ffhd1c92vhwaax9hsp1sd974jxfrdwlugc == \7\7\i\1\i\z\s\e\y\k\2\0\9\k\a\n\u\7\l\s\b\v\s\a\6\8\9\s\7\t\l\q\u\v\w\5\o\j\k\c\u\m\h\o\4\1\8\u\8\w\i\u\i\p\v\u\z\b\v\p\z\g\o\a\s\6\0\z\k\x\m\j\y\b\i\d\k\f\h\x\f\m\u\p\r\t\s\6\h\6\v\d\e\r\7\2\y\k\b\a\c\t\0\u\0\g\h\u\t\n\4\i\7\k\0\8\f\2\r\4\1\b\r\f\q\0\d\z\o\q\e\v\e\w\o\l\c\7\1\q\u\x\k\e\y\l\o\p\5\f\2\a\y\z\7\r\f\y\a\g\q\y\7\q\4\d\r\6\3\0\5\u\p\q\g\l\q\i\y\q\i\8\m\j\c\b\0\l\l\z\b\t\g\5\k\b\e\h\u\t\b\d\6\f\4\5\t\n\b\u\2\r\o\r\c\9\l\y\6\5\6\4\6\g\7\s\j\5\3\o\r\f\d\l\r\v\a\u\v\f\x\t\b\a\g\k\k\a\4\b\1\k\v\9\1\r\w\v\l\w\n\8\s\y\c\e\a\s\l\m\q\5\4\h\c\q\5\w\j\a\c\9\9\p\t\x\v\l\s\z\j\7\p\5\x\7\8\k\v\d\6\6\o\6\6\a\7\5\3\b\m\o\6\a\l\f\2\u\o\x\j\h\k\v\h\2\5\g\q\j\i\c\u\e\6\u\e\5\w\5\q\4\d\1\c\g\q\6\y\v\a\7\7\j\i\z\2\s\h\1\7\e\h\y\8\q\v\y\f\g\j\f\g\5\3\0\r\e\1\r\2\6\o\m\s\n\n\2\4\q\g\5\s\o\0\p\p\k\6\r\u\z\6\n\7\c\d\v\v\9\7\t\u\d\n\m\f\u\a\u\r\0\h\s\7\j\5\b\e\e\a\3\c\c\r\9\e\d\i\0\z\m\j\l\7\j\b\1\0\l\p\l\8\r\j\e\1\f\9\8\y\3\g\c\e\w\7\2\l\4\d\j\x\s\r\d\d\6\f\f\h\d\1\c\9\2\v\h\w\a\a\x\9\h\s\p\1\s\d\9\7\4\j\x\f\r\d\w\l\u\g\c ]] 00:05:45.118 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:45.118 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:45.118 [2024-11-17 13:15:34.197498] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:45.118 [2024-11-17 13:15:34.197594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:05:45.377 [2024-11-17 13:15:34.340897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.377 [2024-11-17 13:15:34.390920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.377 [2024-11-17 13:15:34.444698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.377  [2024-11-17T13:15:34.860Z] Copying: 512/512 [B] (average 500 kBps) 00:05:45.636 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 77i1izseyk209kanu7lsbvsa689s7tlquvw5ojkcumho418u8wiuipvuzbvpzgoas60zkxmjybidkfhxfmuprts6h6vder72ykbact0u0ghutn4i7k08f2r41brfq0dzoqevewolc71quxkeylop5f2ayz7rfyagqy7q4dr6305upqglqiyqi8mjcb0llzbtg5kbehutbd6f45tnbu2rorc9ly65646g7sj53orfdlrvauvfxtbagkka4b1kv91rwvlwn8syceaslmq54hcq5wjac99ptxvlszj7p5x78kvd66o66a753bmo6alf2uoxjhkvh25gqjicue6ue5w5q4d1cgq6yva77jiz2sh17ehy8qvyfgjfg530re1r26omsnn24qg5so0ppk6ruz6n7cdvv97tudnmfuaur0hs7j5beea3ccr9edi0zmjl7jb10lpl8rje1f98y3gcew72l4djxsrdd6ffhd1c92vhwaax9hsp1sd974jxfrdwlugc == \7\7\i\1\i\z\s\e\y\k\2\0\9\k\a\n\u\7\l\s\b\v\s\a\6\8\9\s\7\t\l\q\u\v\w\5\o\j\k\c\u\m\h\o\4\1\8\u\8\w\i\u\i\p\v\u\z\b\v\p\z\g\o\a\s\6\0\z\k\x\m\j\y\b\i\d\k\f\h\x\f\m\u\p\r\t\s\6\h\6\v\d\e\r\7\2\y\k\b\a\c\t\0\u\0\g\h\u\t\n\4\i\7\k\0\8\f\2\r\4\1\b\r\f\q\0\d\z\o\q\e\v\e\w\o\l\c\7\1\q\u\x\k\e\y\l\o\p\5\f\2\a\y\z\7\r\f\y\a\g\q\y\7\q\4\d\r\6\3\0\5\u\p\q\g\l\q\i\y\q\i\8\m\j\c\b\0\l\l\z\b\t\g\5\k\b\e\h\u\t\b\d\6\f\4\5\t\n\b\u\2\r\o\r\c\9\l\y\6\5\6\4\6\g\7\s\j\5\3\o\r\f\d\l\r\v\a\u\v\f\x\t\b\a\g\k\k\a\4\b\1\k\v\9\1\r\w\v\l\w\n\8\s\y\c\e\a\s\l\m\q\5\4\h\c\q\5\w\j\a\c\9\9\p\t\x\v\l\s\z\j\7\p\5\x\7\8\k\v\d\6\6\o\6\6\a\7\5\3\b\m\o\6\a\l\f\2\u\o\x\j\h\k\v\h\2\5\g\q\j\i\c\u\e\6\u\e\5\w\5\q\4\d\1\c\g\q\6\y\v\a\7\7\j\i\z\2\s\h\1\7\e\h\y\8\q\v\y\f\g\j\f\g\5\3\0\r\e\1\r\2\6\o\m\s\n\n\2\4\q\g\5\s\o\0\p\p\k\6\r\u\z\6\n\7\c\d\v\v\9\7\t\u\d\n\m\f\u\a\u\r\0\h\s\7\j\5\b\e\e\a\3\c\c\r\9\e\d\i\0\z\m\j\l\7\j\b\1\0\l\p\l\8\r\j\e\1\f\9\8\y\3\g\c\e\w\7\2\l\4\d\j\x\s\r\d\d\6\f\f\h\d\1\c\9\2\v\h\w\a\a\x\9\h\s\p\1\s\d\9\7\4\j\x\f\r\d\w\l\u\g\c ]] 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:45.636 13:15:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:45.636 [2024-11-17 13:15:34.727467] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
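The for flag_ro / for flag_rw lines traced here show the test's nested loop: flags_ro=(direct nonblock) is applied on the input side and flags_rw=(direct nonblock sync dsync) on the output side, so each of the eight iflag/oflag pairs gets its own copy-and-compare pass (the per-pass spdk_pid and the reported average throughput are what vary between the otherwise identical blocks). Roughly the same loop with coreutils dd standing in for spdk_dd (illustrative paths; direct requires O_DIRECT support):

    for iflag in direct nonblock; do
        for oflag in direct nonblock sync dsync; do
            dd if=dump0 iflag="$iflag" of=dump1 oflag="$oflag" status=none
            cmp -s dump0 dump1 || echo "mismatch for iflag=$iflag oflag=$oflag"
        done
    done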
00:05:45.636 [2024-11-17 13:15:34.727553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60342 ] 00:05:45.895 [2024-11-17 13:15:34.869533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.895 [2024-11-17 13:15:34.921439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.895 [2024-11-17 13:15:34.975268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.895  [2024-11-17T13:15:35.377Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.153 00:05:46.153 13:15:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4zjoyquytiavyrlvt0k1md7ww3uml4c2by0xw2k40gms83ja2q72iyg21r4xyc2f86p590qc3y3bq6kvvsp7e5lkw0vt0euz4gb3uirv4mqk3bqlct469rlkcf3lqcau7yefblhdio9zt86uspvuu9shw2rey48ne3y48mdohwgwqflgz1ykjkuzwboxzqfm1o3e17y3jg6k2ixhnpqjv70up29vpetmoz2igax552mcf7qejrmpt4jm119rakp7phipms74nzr0pc1k35zrdzw47rjs9g4br8oqbkvmbzi2zdy71bhmp8sf4om1bc08z91wtu4um5tdczizk92tqiz7khz4mwb49vxn1xxc1h28c03zp1blcwpq4u72kyo499knot6f60jpin3v5ryllhp821so9ggf0kejkxo68fr5n6bvm6yhre4nsi8h6kmt68vv3fd2r230nbrgqb4h8hmp32liqqk9cpnj3ajaczar7ogvm6l5xnyok3z1fd90 == \4\z\j\o\y\q\u\y\t\i\a\v\y\r\l\v\t\0\k\1\m\d\7\w\w\3\u\m\l\4\c\2\b\y\0\x\w\2\k\4\0\g\m\s\8\3\j\a\2\q\7\2\i\y\g\2\1\r\4\x\y\c\2\f\8\6\p\5\9\0\q\c\3\y\3\b\q\6\k\v\v\s\p\7\e\5\l\k\w\0\v\t\0\e\u\z\4\g\b\3\u\i\r\v\4\m\q\k\3\b\q\l\c\t\4\6\9\r\l\k\c\f\3\l\q\c\a\u\7\y\e\f\b\l\h\d\i\o\9\z\t\8\6\u\s\p\v\u\u\9\s\h\w\2\r\e\y\4\8\n\e\3\y\4\8\m\d\o\h\w\g\w\q\f\l\g\z\1\y\k\j\k\u\z\w\b\o\x\z\q\f\m\1\o\3\e\1\7\y\3\j\g\6\k\2\i\x\h\n\p\q\j\v\7\0\u\p\2\9\v\p\e\t\m\o\z\2\i\g\a\x\5\5\2\m\c\f\7\q\e\j\r\m\p\t\4\j\m\1\1\9\r\a\k\p\7\p\h\i\p\m\s\7\4\n\z\r\0\p\c\1\k\3\5\z\r\d\z\w\4\7\r\j\s\9\g\4\b\r\8\o\q\b\k\v\m\b\z\i\2\z\d\y\7\1\b\h\m\p\8\s\f\4\o\m\1\b\c\0\8\z\9\1\w\t\u\4\u\m\5\t\d\c\z\i\z\k\9\2\t\q\i\z\7\k\h\z\4\m\w\b\4\9\v\x\n\1\x\x\c\1\h\2\8\c\0\3\z\p\1\b\l\c\w\p\q\4\u\7\2\k\y\o\4\9\9\k\n\o\t\6\f\6\0\j\p\i\n\3\v\5\r\y\l\l\h\p\8\2\1\s\o\9\g\g\f\0\k\e\j\k\x\o\6\8\f\r\5\n\6\b\v\m\6\y\h\r\e\4\n\s\i\8\h\6\k\m\t\6\8\v\v\3\f\d\2\r\2\3\0\n\b\r\g\q\b\4\h\8\h\m\p\3\2\l\i\q\q\k\9\c\p\n\j\3\a\j\a\c\z\a\r\7\o\g\v\m\6\l\5\x\n\y\o\k\3\z\1\f\d\9\0 ]] 00:05:46.153 13:15:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.153 13:15:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:46.153 [2024-11-17 13:15:35.254363] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:46.153 [2024-11-17 13:15:35.254459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:05:46.411 [2024-11-17 13:15:35.397674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.411 [2024-11-17 13:15:35.447970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.411 [2024-11-17 13:15:35.501298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.411  [2024-11-17T13:15:35.893Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.670 00:05:46.670 13:15:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4zjoyquytiavyrlvt0k1md7ww3uml4c2by0xw2k40gms83ja2q72iyg21r4xyc2f86p590qc3y3bq6kvvsp7e5lkw0vt0euz4gb3uirv4mqk3bqlct469rlkcf3lqcau7yefblhdio9zt86uspvuu9shw2rey48ne3y48mdohwgwqflgz1ykjkuzwboxzqfm1o3e17y3jg6k2ixhnpqjv70up29vpetmoz2igax552mcf7qejrmpt4jm119rakp7phipms74nzr0pc1k35zrdzw47rjs9g4br8oqbkvmbzi2zdy71bhmp8sf4om1bc08z91wtu4um5tdczizk92tqiz7khz4mwb49vxn1xxc1h28c03zp1blcwpq4u72kyo499knot6f60jpin3v5ryllhp821so9ggf0kejkxo68fr5n6bvm6yhre4nsi8h6kmt68vv3fd2r230nbrgqb4h8hmp32liqqk9cpnj3ajaczar7ogvm6l5xnyok3z1fd90 == \4\z\j\o\y\q\u\y\t\i\a\v\y\r\l\v\t\0\k\1\m\d\7\w\w\3\u\m\l\4\c\2\b\y\0\x\w\2\k\4\0\g\m\s\8\3\j\a\2\q\7\2\i\y\g\2\1\r\4\x\y\c\2\f\8\6\p\5\9\0\q\c\3\y\3\b\q\6\k\v\v\s\p\7\e\5\l\k\w\0\v\t\0\e\u\z\4\g\b\3\u\i\r\v\4\m\q\k\3\b\q\l\c\t\4\6\9\r\l\k\c\f\3\l\q\c\a\u\7\y\e\f\b\l\h\d\i\o\9\z\t\8\6\u\s\p\v\u\u\9\s\h\w\2\r\e\y\4\8\n\e\3\y\4\8\m\d\o\h\w\g\w\q\f\l\g\z\1\y\k\j\k\u\z\w\b\o\x\z\q\f\m\1\o\3\e\1\7\y\3\j\g\6\k\2\i\x\h\n\p\q\j\v\7\0\u\p\2\9\v\p\e\t\m\o\z\2\i\g\a\x\5\5\2\m\c\f\7\q\e\j\r\m\p\t\4\j\m\1\1\9\r\a\k\p\7\p\h\i\p\m\s\7\4\n\z\r\0\p\c\1\k\3\5\z\r\d\z\w\4\7\r\j\s\9\g\4\b\r\8\o\q\b\k\v\m\b\z\i\2\z\d\y\7\1\b\h\m\p\8\s\f\4\o\m\1\b\c\0\8\z\9\1\w\t\u\4\u\m\5\t\d\c\z\i\z\k\9\2\t\q\i\z\7\k\h\z\4\m\w\b\4\9\v\x\n\1\x\x\c\1\h\2\8\c\0\3\z\p\1\b\l\c\w\p\q\4\u\7\2\k\y\o\4\9\9\k\n\o\t\6\f\6\0\j\p\i\n\3\v\5\r\y\l\l\h\p\8\2\1\s\o\9\g\g\f\0\k\e\j\k\x\o\6\8\f\r\5\n\6\b\v\m\6\y\h\r\e\4\n\s\i\8\h\6\k\m\t\6\8\v\v\3\f\d\2\r\2\3\0\n\b\r\g\q\b\4\h\8\h\m\p\3\2\l\i\q\q\k\9\c\p\n\j\3\a\j\a\c\z\a\r\7\o\g\v\m\6\l\5\x\n\y\o\k\3\z\1\f\d\9\0 ]] 00:05:46.670 13:15:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.670 13:15:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:46.670 [2024-11-17 13:15:35.781636] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:46.670 [2024-11-17 13:15:35.781733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60361 ] 00:05:46.928 [2024-11-17 13:15:35.931603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.928 [2024-11-17 13:15:35.984651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.928 [2024-11-17 13:15:36.039942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.928  [2024-11-17T13:15:36.410Z] Copying: 512/512 [B] (average 250 kBps) 00:05:47.186 00:05:47.186 13:15:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4zjoyquytiavyrlvt0k1md7ww3uml4c2by0xw2k40gms83ja2q72iyg21r4xyc2f86p590qc3y3bq6kvvsp7e5lkw0vt0euz4gb3uirv4mqk3bqlct469rlkcf3lqcau7yefblhdio9zt86uspvuu9shw2rey48ne3y48mdohwgwqflgz1ykjkuzwboxzqfm1o3e17y3jg6k2ixhnpqjv70up29vpetmoz2igax552mcf7qejrmpt4jm119rakp7phipms74nzr0pc1k35zrdzw47rjs9g4br8oqbkvmbzi2zdy71bhmp8sf4om1bc08z91wtu4um5tdczizk92tqiz7khz4mwb49vxn1xxc1h28c03zp1blcwpq4u72kyo499knot6f60jpin3v5ryllhp821so9ggf0kejkxo68fr5n6bvm6yhre4nsi8h6kmt68vv3fd2r230nbrgqb4h8hmp32liqqk9cpnj3ajaczar7ogvm6l5xnyok3z1fd90 == \4\z\j\o\y\q\u\y\t\i\a\v\y\r\l\v\t\0\k\1\m\d\7\w\w\3\u\m\l\4\c\2\b\y\0\x\w\2\k\4\0\g\m\s\8\3\j\a\2\q\7\2\i\y\g\2\1\r\4\x\y\c\2\f\8\6\p\5\9\0\q\c\3\y\3\b\q\6\k\v\v\s\p\7\e\5\l\k\w\0\v\t\0\e\u\z\4\g\b\3\u\i\r\v\4\m\q\k\3\b\q\l\c\t\4\6\9\r\l\k\c\f\3\l\q\c\a\u\7\y\e\f\b\l\h\d\i\o\9\z\t\8\6\u\s\p\v\u\u\9\s\h\w\2\r\e\y\4\8\n\e\3\y\4\8\m\d\o\h\w\g\w\q\f\l\g\z\1\y\k\j\k\u\z\w\b\o\x\z\q\f\m\1\o\3\e\1\7\y\3\j\g\6\k\2\i\x\h\n\p\q\j\v\7\0\u\p\2\9\v\p\e\t\m\o\z\2\i\g\a\x\5\5\2\m\c\f\7\q\e\j\r\m\p\t\4\j\m\1\1\9\r\a\k\p\7\p\h\i\p\m\s\7\4\n\z\r\0\p\c\1\k\3\5\z\r\d\z\w\4\7\r\j\s\9\g\4\b\r\8\o\q\b\k\v\m\b\z\i\2\z\d\y\7\1\b\h\m\p\8\s\f\4\o\m\1\b\c\0\8\z\9\1\w\t\u\4\u\m\5\t\d\c\z\i\z\k\9\2\t\q\i\z\7\k\h\z\4\m\w\b\4\9\v\x\n\1\x\x\c\1\h\2\8\c\0\3\z\p\1\b\l\c\w\p\q\4\u\7\2\k\y\o\4\9\9\k\n\o\t\6\f\6\0\j\p\i\n\3\v\5\r\y\l\l\h\p\8\2\1\s\o\9\g\g\f\0\k\e\j\k\x\o\6\8\f\r\5\n\6\b\v\m\6\y\h\r\e\4\n\s\i\8\h\6\k\m\t\6\8\v\v\3\f\d\2\r\2\3\0\n\b\r\g\q\b\4\h\8\h\m\p\3\2\l\i\q\q\k\9\c\p\n\j\3\a\j\a\c\z\a\r\7\o\g\v\m\6\l\5\x\n\y\o\k\3\z\1\f\d\9\0 ]] 00:05:47.186 13:15:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:47.186 13:15:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:47.186 [2024-11-17 13:15:36.310295] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:47.186 [2024-11-17 13:15:36.310394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:05:47.444 [2024-11-17 13:15:36.458700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.444 [2024-11-17 13:15:36.510117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.444 [2024-11-17 13:15:36.565272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.444  [2024-11-17T13:15:36.928Z] Copying: 512/512 [B] (average 166 kBps) 00:05:47.704 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4zjoyquytiavyrlvt0k1md7ww3uml4c2by0xw2k40gms83ja2q72iyg21r4xyc2f86p590qc3y3bq6kvvsp7e5lkw0vt0euz4gb3uirv4mqk3bqlct469rlkcf3lqcau7yefblhdio9zt86uspvuu9shw2rey48ne3y48mdohwgwqflgz1ykjkuzwboxzqfm1o3e17y3jg6k2ixhnpqjv70up29vpetmoz2igax552mcf7qejrmpt4jm119rakp7phipms74nzr0pc1k35zrdzw47rjs9g4br8oqbkvmbzi2zdy71bhmp8sf4om1bc08z91wtu4um5tdczizk92tqiz7khz4mwb49vxn1xxc1h28c03zp1blcwpq4u72kyo499knot6f60jpin3v5ryllhp821so9ggf0kejkxo68fr5n6bvm6yhre4nsi8h6kmt68vv3fd2r230nbrgqb4h8hmp32liqqk9cpnj3ajaczar7ogvm6l5xnyok3z1fd90 == \4\z\j\o\y\q\u\y\t\i\a\v\y\r\l\v\t\0\k\1\m\d\7\w\w\3\u\m\l\4\c\2\b\y\0\x\w\2\k\4\0\g\m\s\8\3\j\a\2\q\7\2\i\y\g\2\1\r\4\x\y\c\2\f\8\6\p\5\9\0\q\c\3\y\3\b\q\6\k\v\v\s\p\7\e\5\l\k\w\0\v\t\0\e\u\z\4\g\b\3\u\i\r\v\4\m\q\k\3\b\q\l\c\t\4\6\9\r\l\k\c\f\3\l\q\c\a\u\7\y\e\f\b\l\h\d\i\o\9\z\t\8\6\u\s\p\v\u\u\9\s\h\w\2\r\e\y\4\8\n\e\3\y\4\8\m\d\o\h\w\g\w\q\f\l\g\z\1\y\k\j\k\u\z\w\b\o\x\z\q\f\m\1\o\3\e\1\7\y\3\j\g\6\k\2\i\x\h\n\p\q\j\v\7\0\u\p\2\9\v\p\e\t\m\o\z\2\i\g\a\x\5\5\2\m\c\f\7\q\e\j\r\m\p\t\4\j\m\1\1\9\r\a\k\p\7\p\h\i\p\m\s\7\4\n\z\r\0\p\c\1\k\3\5\z\r\d\z\w\4\7\r\j\s\9\g\4\b\r\8\o\q\b\k\v\m\b\z\i\2\z\d\y\7\1\b\h\m\p\8\s\f\4\o\m\1\b\c\0\8\z\9\1\w\t\u\4\u\m\5\t\d\c\z\i\z\k\9\2\t\q\i\z\7\k\h\z\4\m\w\b\4\9\v\x\n\1\x\x\c\1\h\2\8\c\0\3\z\p\1\b\l\c\w\p\q\4\u\7\2\k\y\o\4\9\9\k\n\o\t\6\f\6\0\j\p\i\n\3\v\5\r\y\l\l\h\p\8\2\1\s\o\9\g\g\f\0\k\e\j\k\x\o\6\8\f\r\5\n\6\b\v\m\6\y\h\r\e\4\n\s\i\8\h\6\k\m\t\6\8\v\v\3\f\d\2\r\2\3\0\n\b\r\g\q\b\4\h\8\h\m\p\3\2\l\i\q\q\k\9\c\p\n\j\3\a\j\a\c\z\a\r\7\o\g\v\m\6\l\5\x\n\y\o\k\3\z\1\f\d\9\0 ]] 00:05:47.704 00:05:47.704 real 0m4.211s 00:05:47.704 user 0m2.305s 00:05:47.704 sys 0m2.085s 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:47.704 ************************************ 00:05:47.704 END TEST dd_flags_misc 00:05:47.704 ************************************ 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:05:47.704 * Second test run, disabling liburing, forcing AIO 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.704 ************************************ 00:05:47.704 START TEST dd_flag_append_forced_aio 00:05:47.704 ************************************ 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=z36fexgn2y9df9ookur6htqd0ohr2sj5 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=g7rvfoefo2ik5bxmyhpfr3yg2kck9e2n 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s z36fexgn2y9df9ookur6htqd0ohr2sj5 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s g7rvfoefo2ik5bxmyhpfr3yg2kck9e2n 00:05:47.704 13:15:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:47.704 [2024-11-17 13:15:36.896962] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
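The append test above seeds dd.dump0 and dd.dump1 with two different 32-character strings (z36f... and g7rv...), then runs spdk_dd with --aio --oflag=append so dump0's contents are appended onto dump1; the check at dd/posix.sh@27 that follows expects dump1 to hold the second string immediately followed by the first. The same behaviour sketched with coreutils dd (conv=notrunc is needed there so dd does not truncate the output before appending; the strings are placeholders):

    printf %s 'AAAA' > dump0
    printf %s 'BBBB' > dump1
    dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
    [[ "$(cat dump1)" == 'BBBBAAAA' ]] && echo "append verified"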
00:05:47.704 [2024-11-17 13:15:36.897042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60399 ] 00:05:47.963 [2024-11-17 13:15:37.040833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.963 [2024-11-17 13:15:37.095705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.963 [2024-11-17 13:15:37.157765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.221  [2024-11-17T13:15:37.445Z] Copying: 32/32 [B] (average 31 kBps) 00:05:48.221 00:05:48.221 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ g7rvfoefo2ik5bxmyhpfr3yg2kck9e2nz36fexgn2y9df9ookur6htqd0ohr2sj5 == \g\7\r\v\f\o\e\f\o\2\i\k\5\b\x\m\y\h\p\f\r\3\y\g\2\k\c\k\9\e\2\n\z\3\6\f\e\x\g\n\2\y\9\d\f\9\o\o\k\u\r\6\h\t\q\d\0\o\h\r\2\s\j\5 ]] 00:05:48.221 00:05:48.221 real 0m0.577s 00:05:48.221 user 0m0.311s 00:05:48.221 sys 0m0.145s 00:05:48.221 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.221 ************************************ 00:05:48.221 END TEST dd_flag_append_forced_aio 00:05:48.221 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:48.221 ************************************ 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:48.480 ************************************ 00:05:48.480 START TEST dd_flag_directory_forced_aio 00:05:48.480 ************************************ 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.480 13:15:37 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:48.480 13:15:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:48.480 [2024-11-17 13:15:37.533324] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:48.480 [2024-11-17 13:15:37.533416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:05:48.480 [2024-11-17 13:15:37.683281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.739 [2024-11-17 13:15:37.735326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.739 [2024-11-17 13:15:37.793055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.739 [2024-11-17 13:15:37.833825] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:48.739 [2024-11-17 13:15:37.833911] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:48.739 [2024-11-17 13:15:37.833947] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.997 [2024-11-17 13:15:37.963025] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.997 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.998 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:48.998 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:48.998 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:48.998 [2024-11-17 13:15:38.091737] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:48.998 [2024-11-17 13:15:38.091847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:05:49.257 [2024-11-17 13:15:38.238871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.257 [2024-11-17 13:15:38.296538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.257 [2024-11-17 13:15:38.357416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.257 [2024-11-17 13:15:38.397486] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:49.257 [2024-11-17 13:15:38.397561] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:49.257 [2024-11-17 13:15:38.397611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.516 [2024-11-17 13:15:38.527604] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:49.516 13:15:38 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.516 00:05:49.516 real 0m1.143s 00:05:49.516 user 0m0.618s 00:05:49.516 sys 0m0.315s 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.516 ************************************ 00:05:49.516 END TEST dd_flag_directory_forced_aio 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:49.516 ************************************ 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:49.516 ************************************ 00:05:49.516 START TEST dd_flag_nofollow_forced_aio 00:05:49.516 ************************************ 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:49.516 13:15:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.516 [2024-11-17 13:15:38.725920] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:49.516 [2024-11-17 13:15:38.726050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60466 ] 00:05:49.775 [2024-11-17 13:15:38.874562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.775 [2024-11-17 13:15:38.927405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.775 [2024-11-17 13:15:38.980428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.034 [2024-11-17 13:15:39.014290] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:50.034 [2024-11-17 13:15:39.014362] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:50.034 [2024-11-17 13:15:39.014396] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.034 [2024-11-17 13:15:39.124505] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:50.034 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:50.034 [2024-11-17 13:15:39.233227] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:50.034 [2024-11-17 13:15:39.233311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60475 ] 00:05:50.292 [2024-11-17 13:15:39.371521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.292 [2024-11-17 13:15:39.421482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.292 [2024-11-17 13:15:39.475464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.292 [2024-11-17 13:15:39.509534] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:50.292 [2024-11-17 13:15:39.509595] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:50.292 [2024-11-17 13:15:39.509617] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.550 [2024-11-17 13:15:39.620870] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:50.550 13:15:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.550 [2024-11-17 13:15:39.736889] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:50.550 [2024-11-17 13:15:39.736975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:05:50.809 [2024-11-17 13:15:39.876175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.809 [2024-11-17 13:15:39.929683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.809 [2024-11-17 13:15:39.982707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.809  [2024-11-17T13:15:40.292Z] Copying: 512/512 [B] (average 500 kBps) 00:05:51.068 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 3z73h3ffh305hfdphgiymvu2ut06obuo0g5wql3bftol6bdotryhriv9wagv4btga1vp8v5n3m2ssgprga3ohw5p6ai35ful2ntzdw0x9kabntsv812mv4wy59m2m4x05df2umca2s0on6wi4jbt28gkdq0e5oeg0wunp9szogxzsqm3ge7u54hfzk7dckblto1jx43dw17m4tl6scwjrbdnsso3kichavrlxjte4zwnjp25x9ip7bsxt6gotlexq6sssdjwa4j46oj6mgcuo1o1236wbn8e11lrqu4pw00oynx3i04r7xz5yvs9cwgvmqpa5djcojqtw8m7zjcg92v23u6pjyn0e72pez05po5j1z4c3php33l14hg7ila61919y6cckyidnq1c0e2g3brokwyb5ncfybyatnpdt5g7r9z0z7wz2hfdq74msiodsvx99eoiqkivbaj7ycg18wli665cfmajuzawpo0linlthtcsno1maf6jo0a0igc0 == \3\z\7\3\h\3\f\f\h\3\0\5\h\f\d\p\h\g\i\y\m\v\u\2\u\t\0\6\o\b\u\o\0\g\5\w\q\l\3\b\f\t\o\l\6\b\d\o\t\r\y\h\r\i\v\9\w\a\g\v\4\b\t\g\a\1\v\p\8\v\5\n\3\m\2\s\s\g\p\r\g\a\3\o\h\w\5\p\6\a\i\3\5\f\u\l\2\n\t\z\d\w\0\x\9\k\a\b\n\t\s\v\8\1\2\m\v\4\w\y\5\9\m\2\m\4\x\0\5\d\f\2\u\m\c\a\2\s\0\o\n\6\w\i\4\j\b\t\2\8\g\k\d\q\0\e\5\o\e\g\0\w\u\n\p\9\s\z\o\g\x\z\s\q\m\3\g\e\7\u\5\4\h\f\z\k\7\d\c\k\b\l\t\o\1\j\x\4\3\d\w\1\7\m\4\t\l\6\s\c\w\j\r\b\d\n\s\s\o\3\k\i\c\h\a\v\r\l\x\j\t\e\4\z\w\n\j\p\2\5\x\9\i\p\7\b\s\x\t\6\g\o\t\l\e\x\q\6\s\s\s\d\j\w\a\4\j\4\6\o\j\6\m\g\c\u\o\1\o\1\2\3\6\w\b\n\8\e\1\1\l\r\q\u\4\p\w\0\0\o\y\n\x\3\i\0\4\r\7\x\z\5\y\v\s\9\c\w\g\v\m\q\p\a\5\d\j\c\o\j\q\t\w\8\m\7\z\j\c\g\9\2\v\2\3\u\6\p\j\y\n\0\e\7\2\p\e\z\0\5\p\o\5\j\1\z\4\c\3\p\h\p\3\3\l\1\4\h\g\7\i\l\a\6\1\9\1\9\y\6\c\c\k\y\i\d\n\q\1\c\0\e\2\g\3\b\r\o\k\w\y\b\5\n\c\f\y\b\y\a\t\n\p\d\t\5\g\7\r\9\z\0\z\7\w\z\2\h\f\d\q\7\4\m\s\i\o\d\s\v\x\9\9\e\o\i\q\k\i\v\b\a\j\7\y\c\g\1\8\w\l\i\6\6\5\c\f\m\a\j\u\z\a\w\p\o\0\l\i\n\l\t\h\t\c\s\n\o\1\m\a\f\6\j\o\0\a\0\i\g\c\0 ]] 00:05:51.068 00:05:51.068 real 0m1.553s 00:05:51.068 user 0m0.838s 00:05:51.068 sys 0m0.390s 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.068 ************************************ 00:05:51.068 END TEST dd_flag_nofollow_forced_aio 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:51.068 ************************************ 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:51.068 ************************************ 00:05:51.068 START TEST dd_flag_noatime_forced_aio 00:05:51.068 ************************************ 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731849340 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731849340 00:05:51.068 13:15:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:05:52.444 13:15:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.444 [2024-11-17 13:15:41.345394] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
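The noatime test that starts here records dd.dump0's access time with stat --printf=%X (atime_if=1731849340), sleeps one second, and copies with --iflag=noatime; the assertions that follow check that the atime is unchanged after the noatime copy and only moves forward once a second copy is made without the flag. A condensed sketch of the preservation check, using coreutils dd as a stand-in (iflag=noatime maps to O_NOATIME, which requires file ownership or CAP_FOWNER, and some filesystems ignore it):

    printf %s 'payload' > dump0                      # own the file so O_NOATIME is permitted
    atime_before=$(stat --printf=%X dump0)
    sleep 1
    dd if=dump0 iflag=noatime of=dump1 status=none   # read without updating the access time
    atime_after=$(stat --printf=%X dump0)
    (( atime_before == atime_after )) && echo "atime preserved"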
00:05:52.444 [2024-11-17 13:15:41.345487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60523 ] 00:05:52.444 [2024-11-17 13:15:41.499147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.444 [2024-11-17 13:15:41.558183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.444 [2024-11-17 13:15:41.615226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.444  [2024-11-17T13:15:41.927Z] Copying: 512/512 [B] (average 500 kBps) 00:05:52.703 00:05:52.703 13:15:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:52.703 13:15:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731849340 )) 00:05:52.703 13:15:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.703 13:15:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731849340 )) 00:05:52.703 13:15:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.703 [2024-11-17 13:15:41.907411] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:52.703 [2024-11-17 13:15:41.907518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:05:52.963 [2024-11-17 13:15:42.056001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.963 [2024-11-17 13:15:42.109126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.963 [2024-11-17 13:15:42.164112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.222  [2024-11-17T13:15:42.446Z] Copying: 512/512 [B] (average 500 kBps) 00:05:53.222 00:05:53.222 13:15:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:53.222 13:15:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731849342 )) 00:05:53.222 00:05:53.222 real 0m2.138s 00:05:53.222 user 0m0.591s 00:05:53.222 sys 0m0.302s 00:05:53.222 13:15:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.222 ************************************ 00:05:53.222 END TEST dd_flag_noatime_forced_aio 00:05:53.222 13:15:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:53.222 ************************************ 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.481 ************************************ 00:05:53.481 START TEST dd_flags_misc_forced_aio 00:05:53.481 ************************************ 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:53.481 13:15:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:53.481 [2024-11-17 13:15:42.519596] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:53.481 [2024-11-17 13:15:42.519692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60561 ] 00:05:53.481 [2024-11-17 13:15:42.672749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.740 [2024-11-17 13:15:42.735988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.740 [2024-11-17 13:15:42.796715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.740  [2024-11-17T13:15:43.223Z] Copying: 512/512 [B] (average 500 kBps) 00:05:53.999 00:05:53.999 13:15:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7f23vogdb2pqzm5s862avkb57nvc3dycb5pkbiz2j9195t8dw0lszrve1mfduie4e6jmr4x3k18lpkh5737eb86a9ekag5ig0xlwhyqqnx21pnb50oae19xglj4r82pdlzdm7phdy3qsj6qwx3r130ita9nysac55fbgovzdlnzy50wsz43kzagphihkc3wnays89txuyksfiqzgfjb87syijiwtavatrxh9s4ra10k7jfzp5pusfbn5xy04jfx5856i3ei14ufxz1c6b68vla264ox3ffmiyr11vj6y14e3hyafn1hxneyvh2q2605p014flsk463jg7rmbu8hzk8p1hc137qfifx1a69kz1wovgv4kiwk1t5p4c1lbnfh49dfdz62zdmkg8f9rv9fbonszvney0pxsiptu037g3r26bigz1rgzavd9zlln4nfnvkjcokyl14wdasy0t383nsia7i8qhxcq8hjofng9nbs02s4lp5fhqh1kzrlpflir == 
\7\f\2\3\v\o\g\d\b\2\p\q\z\m\5\s\8\6\2\a\v\k\b\5\7\n\v\c\3\d\y\c\b\5\p\k\b\i\z\2\j\9\1\9\5\t\8\d\w\0\l\s\z\r\v\e\1\m\f\d\u\i\e\4\e\6\j\m\r\4\x\3\k\1\8\l\p\k\h\5\7\3\7\e\b\8\6\a\9\e\k\a\g\5\i\g\0\x\l\w\h\y\q\q\n\x\2\1\p\n\b\5\0\o\a\e\1\9\x\g\l\j\4\r\8\2\p\d\l\z\d\m\7\p\h\d\y\3\q\s\j\6\q\w\x\3\r\1\3\0\i\t\a\9\n\y\s\a\c\5\5\f\b\g\o\v\z\d\l\n\z\y\5\0\w\s\z\4\3\k\z\a\g\p\h\i\h\k\c\3\w\n\a\y\s\8\9\t\x\u\y\k\s\f\i\q\z\g\f\j\b\8\7\s\y\i\j\i\w\t\a\v\a\t\r\x\h\9\s\4\r\a\1\0\k\7\j\f\z\p\5\p\u\s\f\b\n\5\x\y\0\4\j\f\x\5\8\5\6\i\3\e\i\1\4\u\f\x\z\1\c\6\b\6\8\v\l\a\2\6\4\o\x\3\f\f\m\i\y\r\1\1\v\j\6\y\1\4\e\3\h\y\a\f\n\1\h\x\n\e\y\v\h\2\q\2\6\0\5\p\0\1\4\f\l\s\k\4\6\3\j\g\7\r\m\b\u\8\h\z\k\8\p\1\h\c\1\3\7\q\f\i\f\x\1\a\6\9\k\z\1\w\o\v\g\v\4\k\i\w\k\1\t\5\p\4\c\1\l\b\n\f\h\4\9\d\f\d\z\6\2\z\d\m\k\g\8\f\9\r\v\9\f\b\o\n\s\z\v\n\e\y\0\p\x\s\i\p\t\u\0\3\7\g\3\r\2\6\b\i\g\z\1\r\g\z\a\v\d\9\z\l\l\n\4\n\f\n\v\k\j\c\o\k\y\l\1\4\w\d\a\s\y\0\t\3\8\3\n\s\i\a\7\i\8\q\h\x\c\q\8\h\j\o\f\n\g\9\n\b\s\0\2\s\4\l\p\5\f\h\q\h\1\k\z\r\l\p\f\l\i\r ]] 00:05:53.999 13:15:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:53.999 13:15:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:53.999 [2024-11-17 13:15:43.099408] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:53.999 [2024-11-17 13:15:43.099508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:05:54.260 [2024-11-17 13:15:43.252090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.260 [2024-11-17 13:15:43.307263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.260 [2024-11-17 13:15:43.364988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.260  [2024-11-17T13:15:43.744Z] Copying: 512/512 [B] (average 500 kBps) 00:05:54.520 00:05:54.520 13:15:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7f23vogdb2pqzm5s862avkb57nvc3dycb5pkbiz2j9195t8dw0lszrve1mfduie4e6jmr4x3k18lpkh5737eb86a9ekag5ig0xlwhyqqnx21pnb50oae19xglj4r82pdlzdm7phdy3qsj6qwx3r130ita9nysac55fbgovzdlnzy50wsz43kzagphihkc3wnays89txuyksfiqzgfjb87syijiwtavatrxh9s4ra10k7jfzp5pusfbn5xy04jfx5856i3ei14ufxz1c6b68vla264ox3ffmiyr11vj6y14e3hyafn1hxneyvh2q2605p014flsk463jg7rmbu8hzk8p1hc137qfifx1a69kz1wovgv4kiwk1t5p4c1lbnfh49dfdz62zdmkg8f9rv9fbonszvney0pxsiptu037g3r26bigz1rgzavd9zlln4nfnvkjcokyl14wdasy0t383nsia7i8qhxcq8hjofng9nbs02s4lp5fhqh1kzrlpflir == 
\7\f\2\3\v\o\g\d\b\2\p\q\z\m\5\s\8\6\2\a\v\k\b\5\7\n\v\c\3\d\y\c\b\5\p\k\b\i\z\2\j\9\1\9\5\t\8\d\w\0\l\s\z\r\v\e\1\m\f\d\u\i\e\4\e\6\j\m\r\4\x\3\k\1\8\l\p\k\h\5\7\3\7\e\b\8\6\a\9\e\k\a\g\5\i\g\0\x\l\w\h\y\q\q\n\x\2\1\p\n\b\5\0\o\a\e\1\9\x\g\l\j\4\r\8\2\p\d\l\z\d\m\7\p\h\d\y\3\q\s\j\6\q\w\x\3\r\1\3\0\i\t\a\9\n\y\s\a\c\5\5\f\b\g\o\v\z\d\l\n\z\y\5\0\w\s\z\4\3\k\z\a\g\p\h\i\h\k\c\3\w\n\a\y\s\8\9\t\x\u\y\k\s\f\i\q\z\g\f\j\b\8\7\s\y\i\j\i\w\t\a\v\a\t\r\x\h\9\s\4\r\a\1\0\k\7\j\f\z\p\5\p\u\s\f\b\n\5\x\y\0\4\j\f\x\5\8\5\6\i\3\e\i\1\4\u\f\x\z\1\c\6\b\6\8\v\l\a\2\6\4\o\x\3\f\f\m\i\y\r\1\1\v\j\6\y\1\4\e\3\h\y\a\f\n\1\h\x\n\e\y\v\h\2\q\2\6\0\5\p\0\1\4\f\l\s\k\4\6\3\j\g\7\r\m\b\u\8\h\z\k\8\p\1\h\c\1\3\7\q\f\i\f\x\1\a\6\9\k\z\1\w\o\v\g\v\4\k\i\w\k\1\t\5\p\4\c\1\l\b\n\f\h\4\9\d\f\d\z\6\2\z\d\m\k\g\8\f\9\r\v\9\f\b\o\n\s\z\v\n\e\y\0\p\x\s\i\p\t\u\0\3\7\g\3\r\2\6\b\i\g\z\1\r\g\z\a\v\d\9\z\l\l\n\4\n\f\n\v\k\j\c\o\k\y\l\1\4\w\d\a\s\y\0\t\3\8\3\n\s\i\a\7\i\8\q\h\x\c\q\8\h\j\o\f\n\g\9\n\b\s\0\2\s\4\l\p\5\f\h\q\h\1\k\z\r\l\p\f\l\i\r ]] 00:05:54.520 13:15:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:54.520 13:15:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:54.520 [2024-11-17 13:15:43.653803] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:54.520 [2024-11-17 13:15:43.653897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60576 ] 00:05:54.779 [2024-11-17 13:15:43.793838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.779 [2024-11-17 13:15:43.843937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.779 [2024-11-17 13:15:43.901376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.779  [2024-11-17T13:15:44.261Z] Copying: 512/512 [B] (average 166 kBps) 00:05:55.037 00:05:55.037 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7f23vogdb2pqzm5s862avkb57nvc3dycb5pkbiz2j9195t8dw0lszrve1mfduie4e6jmr4x3k18lpkh5737eb86a9ekag5ig0xlwhyqqnx21pnb50oae19xglj4r82pdlzdm7phdy3qsj6qwx3r130ita9nysac55fbgovzdlnzy50wsz43kzagphihkc3wnays89txuyksfiqzgfjb87syijiwtavatrxh9s4ra10k7jfzp5pusfbn5xy04jfx5856i3ei14ufxz1c6b68vla264ox3ffmiyr11vj6y14e3hyafn1hxneyvh2q2605p014flsk463jg7rmbu8hzk8p1hc137qfifx1a69kz1wovgv4kiwk1t5p4c1lbnfh49dfdz62zdmkg8f9rv9fbonszvney0pxsiptu037g3r26bigz1rgzavd9zlln4nfnvkjcokyl14wdasy0t383nsia7i8qhxcq8hjofng9nbs02s4lp5fhqh1kzrlpflir == 
\7\f\2\3\v\o\g\d\b\2\p\q\z\m\5\s\8\6\2\a\v\k\b\5\7\n\v\c\3\d\y\c\b\5\p\k\b\i\z\2\j\9\1\9\5\t\8\d\w\0\l\s\z\r\v\e\1\m\f\d\u\i\e\4\e\6\j\m\r\4\x\3\k\1\8\l\p\k\h\5\7\3\7\e\b\8\6\a\9\e\k\a\g\5\i\g\0\x\l\w\h\y\q\q\n\x\2\1\p\n\b\5\0\o\a\e\1\9\x\g\l\j\4\r\8\2\p\d\l\z\d\m\7\p\h\d\y\3\q\s\j\6\q\w\x\3\r\1\3\0\i\t\a\9\n\y\s\a\c\5\5\f\b\g\o\v\z\d\l\n\z\y\5\0\w\s\z\4\3\k\z\a\g\p\h\i\h\k\c\3\w\n\a\y\s\8\9\t\x\u\y\k\s\f\i\q\z\g\f\j\b\8\7\s\y\i\j\i\w\t\a\v\a\t\r\x\h\9\s\4\r\a\1\0\k\7\j\f\z\p\5\p\u\s\f\b\n\5\x\y\0\4\j\f\x\5\8\5\6\i\3\e\i\1\4\u\f\x\z\1\c\6\b\6\8\v\l\a\2\6\4\o\x\3\f\f\m\i\y\r\1\1\v\j\6\y\1\4\e\3\h\y\a\f\n\1\h\x\n\e\y\v\h\2\q\2\6\0\5\p\0\1\4\f\l\s\k\4\6\3\j\g\7\r\m\b\u\8\h\z\k\8\p\1\h\c\1\3\7\q\f\i\f\x\1\a\6\9\k\z\1\w\o\v\g\v\4\k\i\w\k\1\t\5\p\4\c\1\l\b\n\f\h\4\9\d\f\d\z\6\2\z\d\m\k\g\8\f\9\r\v\9\f\b\o\n\s\z\v\n\e\y\0\p\x\s\i\p\t\u\0\3\7\g\3\r\2\6\b\i\g\z\1\r\g\z\a\v\d\9\z\l\l\n\4\n\f\n\v\k\j\c\o\k\y\l\1\4\w\d\a\s\y\0\t\3\8\3\n\s\i\a\7\i\8\q\h\x\c\q\8\h\j\o\f\n\g\9\n\b\s\0\2\s\4\l\p\5\f\h\q\h\1\k\z\r\l\p\f\l\i\r ]] 00:05:55.037 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:55.037 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:55.037 [2024-11-17 13:15:44.195500] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:55.037 [2024-11-17 13:15:44.195579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:05:55.296 [2024-11-17 13:15:44.332080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.296 [2024-11-17 13:15:44.375434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.296 [2024-11-17 13:15:44.426822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.296  [2024-11-17T13:15:44.779Z] Copying: 512/512 [B] (average 500 kBps) 00:05:55.555 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 7f23vogdb2pqzm5s862avkb57nvc3dycb5pkbiz2j9195t8dw0lszrve1mfduie4e6jmr4x3k18lpkh5737eb86a9ekag5ig0xlwhyqqnx21pnb50oae19xglj4r82pdlzdm7phdy3qsj6qwx3r130ita9nysac55fbgovzdlnzy50wsz43kzagphihkc3wnays89txuyksfiqzgfjb87syijiwtavatrxh9s4ra10k7jfzp5pusfbn5xy04jfx5856i3ei14ufxz1c6b68vla264ox3ffmiyr11vj6y14e3hyafn1hxneyvh2q2605p014flsk463jg7rmbu8hzk8p1hc137qfifx1a69kz1wovgv4kiwk1t5p4c1lbnfh49dfdz62zdmkg8f9rv9fbonszvney0pxsiptu037g3r26bigz1rgzavd9zlln4nfnvkjcokyl14wdasy0t383nsia7i8qhxcq8hjofng9nbs02s4lp5fhqh1kzrlpflir == 
\7\f\2\3\v\o\g\d\b\2\p\q\z\m\5\s\8\6\2\a\v\k\b\5\7\n\v\c\3\d\y\c\b\5\p\k\b\i\z\2\j\9\1\9\5\t\8\d\w\0\l\s\z\r\v\e\1\m\f\d\u\i\e\4\e\6\j\m\r\4\x\3\k\1\8\l\p\k\h\5\7\3\7\e\b\8\6\a\9\e\k\a\g\5\i\g\0\x\l\w\h\y\q\q\n\x\2\1\p\n\b\5\0\o\a\e\1\9\x\g\l\j\4\r\8\2\p\d\l\z\d\m\7\p\h\d\y\3\q\s\j\6\q\w\x\3\r\1\3\0\i\t\a\9\n\y\s\a\c\5\5\f\b\g\o\v\z\d\l\n\z\y\5\0\w\s\z\4\3\k\z\a\g\p\h\i\h\k\c\3\w\n\a\y\s\8\9\t\x\u\y\k\s\f\i\q\z\g\f\j\b\8\7\s\y\i\j\i\w\t\a\v\a\t\r\x\h\9\s\4\r\a\1\0\k\7\j\f\z\p\5\p\u\s\f\b\n\5\x\y\0\4\j\f\x\5\8\5\6\i\3\e\i\1\4\u\f\x\z\1\c\6\b\6\8\v\l\a\2\6\4\o\x\3\f\f\m\i\y\r\1\1\v\j\6\y\1\4\e\3\h\y\a\f\n\1\h\x\n\e\y\v\h\2\q\2\6\0\5\p\0\1\4\f\l\s\k\4\6\3\j\g\7\r\m\b\u\8\h\z\k\8\p\1\h\c\1\3\7\q\f\i\f\x\1\a\6\9\k\z\1\w\o\v\g\v\4\k\i\w\k\1\t\5\p\4\c\1\l\b\n\f\h\4\9\d\f\d\z\6\2\z\d\m\k\g\8\f\9\r\v\9\f\b\o\n\s\z\v\n\e\y\0\p\x\s\i\p\t\u\0\3\7\g\3\r\2\6\b\i\g\z\1\r\g\z\a\v\d\9\z\l\l\n\4\n\f\n\v\k\j\c\o\k\y\l\1\4\w\d\a\s\y\0\t\3\8\3\n\s\i\a\7\i\8\q\h\x\c\q\8\h\j\o\f\n\g\9\n\b\s\0\2\s\4\l\p\5\f\h\q\h\1\k\z\r\l\p\f\l\i\r ]] 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:55.555 13:15:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:55.555 [2024-11-17 13:15:44.709600] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:55.555 [2024-11-17 13:15:44.709720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:05:55.814 [2024-11-17 13:15:44.847792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.814 [2024-11-17 13:15:44.892372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.814 [2024-11-17 13:15:44.946090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.814  [2024-11-17T13:15:45.296Z] Copying: 512/512 [B] (average 500 kBps) 00:05:56.072 00:05:56.072 13:15:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzaa3p3jftxxnh0rpuu88vvgsuj0gqqa0s76jonrgf7mvo4wenz4zhkxd6ehjaskq1s3352ir0dsice8zlm2k6ljdwsyt4j46ktsvd4dm2xe7il1809urylwfrj7j0tz4zrur6xwtx7rlbov52m57kbypnqbaki2be6wab4cxmxu0r4ej31i91g105mqt39gcux8ohs1mae29kmichq4oi68exnesb1lb6u8pposesu3ugm8wn360a3jp78itsujufi5u12k4nyulx7ngcpvskz9cxhpnispzpmk7bjzcx76qwdc33qpqnba7617iyzvcq24g4qxa0a4xf7cpxrqq0hnl4n6wfh45qcg75517mkf4ptsg6hk3u7dj79ofeh5nl6guonlfn62iw65cb6fvhberfpb5o7hhm9ud79tltzwbaklgp4xgvclk7kaqp76ihboo3cmnyftwnbm7cqduwghs9860823w48f6n2q5mtlmrabxtfjjarjkjcomlmi == \y\z\a\a\3\p\3\j\f\t\x\x\n\h\0\r\p\u\u\8\8\v\v\g\s\u\j\0\g\q\q\a\0\s\7\6\j\o\n\r\g\f\7\m\v\o\4\w\e\n\z\4\z\h\k\x\d\6\e\h\j\a\s\k\q\1\s\3\3\5\2\i\r\0\d\s\i\c\e\8\z\l\m\2\k\6\l\j\d\w\s\y\t\4\j\4\6\k\t\s\v\d\4\d\m\2\x\e\7\i\l\1\8\0\9\u\r\y\l\w\f\r\j\7\j\0\t\z\4\z\r\u\r\6\x\w\t\x\7\r\l\b\o\v\5\2\m\5\7\k\b\y\p\n\q\b\a\k\i\2\b\e\6\w\a\b\4\c\x\m\x\u\0\r\4\e\j\3\1\i\9\1\g\1\0\5\m\q\t\3\9\g\c\u\x\8\o\h\s\1\m\a\e\2\9\k\m\i\c\h\q\4\o\i\6\8\e\x\n\e\s\b\1\l\b\6\u\8\p\p\o\s\e\s\u\3\u\g\m\8\w\n\3\6\0\a\3\j\p\7\8\i\t\s\u\j\u\f\i\5\u\1\2\k\4\n\y\u\l\x\7\n\g\c\p\v\s\k\z\9\c\x\h\p\n\i\s\p\z\p\m\k\7\b\j\z\c\x\7\6\q\w\d\c\3\3\q\p\q\n\b\a\7\6\1\7\i\y\z\v\c\q\2\4\g\4\q\x\a\0\a\4\x\f\7\c\p\x\r\q\q\0\h\n\l\4\n\6\w\f\h\4\5\q\c\g\7\5\5\1\7\m\k\f\4\p\t\s\g\6\h\k\3\u\7\d\j\7\9\o\f\e\h\5\n\l\6\g\u\o\n\l\f\n\6\2\i\w\6\5\c\b\6\f\v\h\b\e\r\f\p\b\5\o\7\h\h\m\9\u\d\7\9\t\l\t\z\w\b\a\k\l\g\p\4\x\g\v\c\l\k\7\k\a\q\p\7\6\i\h\b\o\o\3\c\m\n\y\f\t\w\n\b\m\7\c\q\d\u\w\g\h\s\9\8\6\0\8\2\3\w\4\8\f\6\n\2\q\5\m\t\l\m\r\a\b\x\t\f\j\j\a\r\j\k\j\c\o\m\l\m\i ]] 00:05:56.072 13:15:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:56.072 13:15:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:56.072 [2024-11-17 13:15:45.220477] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:56.073 [2024-11-17 13:15:45.220592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:05:56.331 [2024-11-17 13:15:45.359740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.331 [2024-11-17 13:15:45.418112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.331 [2024-11-17 13:15:45.471440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.331  [2024-11-17T13:15:45.814Z] Copying: 512/512 [B] (average 500 kBps) 00:05:56.590 00:05:56.591 13:15:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzaa3p3jftxxnh0rpuu88vvgsuj0gqqa0s76jonrgf7mvo4wenz4zhkxd6ehjaskq1s3352ir0dsice8zlm2k6ljdwsyt4j46ktsvd4dm2xe7il1809urylwfrj7j0tz4zrur6xwtx7rlbov52m57kbypnqbaki2be6wab4cxmxu0r4ej31i91g105mqt39gcux8ohs1mae29kmichq4oi68exnesb1lb6u8pposesu3ugm8wn360a3jp78itsujufi5u12k4nyulx7ngcpvskz9cxhpnispzpmk7bjzcx76qwdc33qpqnba7617iyzvcq24g4qxa0a4xf7cpxrqq0hnl4n6wfh45qcg75517mkf4ptsg6hk3u7dj79ofeh5nl6guonlfn62iw65cb6fvhberfpb5o7hhm9ud79tltzwbaklgp4xgvclk7kaqp76ihboo3cmnyftwnbm7cqduwghs9860823w48f6n2q5mtlmrabxtfjjarjkjcomlmi == \y\z\a\a\3\p\3\j\f\t\x\x\n\h\0\r\p\u\u\8\8\v\v\g\s\u\j\0\g\q\q\a\0\s\7\6\j\o\n\r\g\f\7\m\v\o\4\w\e\n\z\4\z\h\k\x\d\6\e\h\j\a\s\k\q\1\s\3\3\5\2\i\r\0\d\s\i\c\e\8\z\l\m\2\k\6\l\j\d\w\s\y\t\4\j\4\6\k\t\s\v\d\4\d\m\2\x\e\7\i\l\1\8\0\9\u\r\y\l\w\f\r\j\7\j\0\t\z\4\z\r\u\r\6\x\w\t\x\7\r\l\b\o\v\5\2\m\5\7\k\b\y\p\n\q\b\a\k\i\2\b\e\6\w\a\b\4\c\x\m\x\u\0\r\4\e\j\3\1\i\9\1\g\1\0\5\m\q\t\3\9\g\c\u\x\8\o\h\s\1\m\a\e\2\9\k\m\i\c\h\q\4\o\i\6\8\e\x\n\e\s\b\1\l\b\6\u\8\p\p\o\s\e\s\u\3\u\g\m\8\w\n\3\6\0\a\3\j\p\7\8\i\t\s\u\j\u\f\i\5\u\1\2\k\4\n\y\u\l\x\7\n\g\c\p\v\s\k\z\9\c\x\h\p\n\i\s\p\z\p\m\k\7\b\j\z\c\x\7\6\q\w\d\c\3\3\q\p\q\n\b\a\7\6\1\7\i\y\z\v\c\q\2\4\g\4\q\x\a\0\a\4\x\f\7\c\p\x\r\q\q\0\h\n\l\4\n\6\w\f\h\4\5\q\c\g\7\5\5\1\7\m\k\f\4\p\t\s\g\6\h\k\3\u\7\d\j\7\9\o\f\e\h\5\n\l\6\g\u\o\n\l\f\n\6\2\i\w\6\5\c\b\6\f\v\h\b\e\r\f\p\b\5\o\7\h\h\m\9\u\d\7\9\t\l\t\z\w\b\a\k\l\g\p\4\x\g\v\c\l\k\7\k\a\q\p\7\6\i\h\b\o\o\3\c\m\n\y\f\t\w\n\b\m\7\c\q\d\u\w\g\h\s\9\8\6\0\8\2\3\w\4\8\f\6\n\2\q\5\m\t\l\m\r\a\b\x\t\f\j\j\a\r\j\k\j\c\o\m\l\m\i ]] 00:05:56.591 13:15:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:56.591 13:15:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:56.591 [2024-11-17 13:15:45.756153] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:56.591 [2024-11-17 13:15:45.756289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:05:56.849 [2024-11-17 13:15:45.904577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.849 [2024-11-17 13:15:45.955416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.849 [2024-11-17 13:15:46.009401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.849  [2024-11-17T13:15:46.332Z] Copying: 512/512 [B] (average 166 kBps) 00:05:57.108 00:05:57.109 13:15:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzaa3p3jftxxnh0rpuu88vvgsuj0gqqa0s76jonrgf7mvo4wenz4zhkxd6ehjaskq1s3352ir0dsice8zlm2k6ljdwsyt4j46ktsvd4dm2xe7il1809urylwfrj7j0tz4zrur6xwtx7rlbov52m57kbypnqbaki2be6wab4cxmxu0r4ej31i91g105mqt39gcux8ohs1mae29kmichq4oi68exnesb1lb6u8pposesu3ugm8wn360a3jp78itsujufi5u12k4nyulx7ngcpvskz9cxhpnispzpmk7bjzcx76qwdc33qpqnba7617iyzvcq24g4qxa0a4xf7cpxrqq0hnl4n6wfh45qcg75517mkf4ptsg6hk3u7dj79ofeh5nl6guonlfn62iw65cb6fvhberfpb5o7hhm9ud79tltzwbaklgp4xgvclk7kaqp76ihboo3cmnyftwnbm7cqduwghs9860823w48f6n2q5mtlmrabxtfjjarjkjcomlmi == \y\z\a\a\3\p\3\j\f\t\x\x\n\h\0\r\p\u\u\8\8\v\v\g\s\u\j\0\g\q\q\a\0\s\7\6\j\o\n\r\g\f\7\m\v\o\4\w\e\n\z\4\z\h\k\x\d\6\e\h\j\a\s\k\q\1\s\3\3\5\2\i\r\0\d\s\i\c\e\8\z\l\m\2\k\6\l\j\d\w\s\y\t\4\j\4\6\k\t\s\v\d\4\d\m\2\x\e\7\i\l\1\8\0\9\u\r\y\l\w\f\r\j\7\j\0\t\z\4\z\r\u\r\6\x\w\t\x\7\r\l\b\o\v\5\2\m\5\7\k\b\y\p\n\q\b\a\k\i\2\b\e\6\w\a\b\4\c\x\m\x\u\0\r\4\e\j\3\1\i\9\1\g\1\0\5\m\q\t\3\9\g\c\u\x\8\o\h\s\1\m\a\e\2\9\k\m\i\c\h\q\4\o\i\6\8\e\x\n\e\s\b\1\l\b\6\u\8\p\p\o\s\e\s\u\3\u\g\m\8\w\n\3\6\0\a\3\j\p\7\8\i\t\s\u\j\u\f\i\5\u\1\2\k\4\n\y\u\l\x\7\n\g\c\p\v\s\k\z\9\c\x\h\p\n\i\s\p\z\p\m\k\7\b\j\z\c\x\7\6\q\w\d\c\3\3\q\p\q\n\b\a\7\6\1\7\i\y\z\v\c\q\2\4\g\4\q\x\a\0\a\4\x\f\7\c\p\x\r\q\q\0\h\n\l\4\n\6\w\f\h\4\5\q\c\g\7\5\5\1\7\m\k\f\4\p\t\s\g\6\h\k\3\u\7\d\j\7\9\o\f\e\h\5\n\l\6\g\u\o\n\l\f\n\6\2\i\w\6\5\c\b\6\f\v\h\b\e\r\f\p\b\5\o\7\h\h\m\9\u\d\7\9\t\l\t\z\w\b\a\k\l\g\p\4\x\g\v\c\l\k\7\k\a\q\p\7\6\i\h\b\o\o\3\c\m\n\y\f\t\w\n\b\m\7\c\q\d\u\w\g\h\s\9\8\6\0\8\2\3\w\4\8\f\6\n\2\q\5\m\t\l\m\r\a\b\x\t\f\j\j\a\r\j\k\j\c\o\m\l\m\i ]] 00:05:57.109 13:15:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:57.109 13:15:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:57.109 [2024-11-17 13:15:46.287938] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:57.109 [2024-11-17 13:15:46.288021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60620 ] 00:05:57.369 [2024-11-17 13:15:46.425275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.369 [2024-11-17 13:15:46.467780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.369 [2024-11-17 13:15:46.520875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.369  [2024-11-17T13:15:46.852Z] Copying: 512/512 [B] (average 250 kBps) 00:05:57.628 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzaa3p3jftxxnh0rpuu88vvgsuj0gqqa0s76jonrgf7mvo4wenz4zhkxd6ehjaskq1s3352ir0dsice8zlm2k6ljdwsyt4j46ktsvd4dm2xe7il1809urylwfrj7j0tz4zrur6xwtx7rlbov52m57kbypnqbaki2be6wab4cxmxu0r4ej31i91g105mqt39gcux8ohs1mae29kmichq4oi68exnesb1lb6u8pposesu3ugm8wn360a3jp78itsujufi5u12k4nyulx7ngcpvskz9cxhpnispzpmk7bjzcx76qwdc33qpqnba7617iyzvcq24g4qxa0a4xf7cpxrqq0hnl4n6wfh45qcg75517mkf4ptsg6hk3u7dj79ofeh5nl6guonlfn62iw65cb6fvhberfpb5o7hhm9ud79tltzwbaklgp4xgvclk7kaqp76ihboo3cmnyftwnbm7cqduwghs9860823w48f6n2q5mtlmrabxtfjjarjkjcomlmi == \y\z\a\a\3\p\3\j\f\t\x\x\n\h\0\r\p\u\u\8\8\v\v\g\s\u\j\0\g\q\q\a\0\s\7\6\j\o\n\r\g\f\7\m\v\o\4\w\e\n\z\4\z\h\k\x\d\6\e\h\j\a\s\k\q\1\s\3\3\5\2\i\r\0\d\s\i\c\e\8\z\l\m\2\k\6\l\j\d\w\s\y\t\4\j\4\6\k\t\s\v\d\4\d\m\2\x\e\7\i\l\1\8\0\9\u\r\y\l\w\f\r\j\7\j\0\t\z\4\z\r\u\r\6\x\w\t\x\7\r\l\b\o\v\5\2\m\5\7\k\b\y\p\n\q\b\a\k\i\2\b\e\6\w\a\b\4\c\x\m\x\u\0\r\4\e\j\3\1\i\9\1\g\1\0\5\m\q\t\3\9\g\c\u\x\8\o\h\s\1\m\a\e\2\9\k\m\i\c\h\q\4\o\i\6\8\e\x\n\e\s\b\1\l\b\6\u\8\p\p\o\s\e\s\u\3\u\g\m\8\w\n\3\6\0\a\3\j\p\7\8\i\t\s\u\j\u\f\i\5\u\1\2\k\4\n\y\u\l\x\7\n\g\c\p\v\s\k\z\9\c\x\h\p\n\i\s\p\z\p\m\k\7\b\j\z\c\x\7\6\q\w\d\c\3\3\q\p\q\n\b\a\7\6\1\7\i\y\z\v\c\q\2\4\g\4\q\x\a\0\a\4\x\f\7\c\p\x\r\q\q\0\h\n\l\4\n\6\w\f\h\4\5\q\c\g\7\5\5\1\7\m\k\f\4\p\t\s\g\6\h\k\3\u\7\d\j\7\9\o\f\e\h\5\n\l\6\g\u\o\n\l\f\n\6\2\i\w\6\5\c\b\6\f\v\h\b\e\r\f\p\b\5\o\7\h\h\m\9\u\d\7\9\t\l\t\z\w\b\a\k\l\g\p\4\x\g\v\c\l\k\7\k\a\q\p\7\6\i\h\b\o\o\3\c\m\n\y\f\t\w\n\b\m\7\c\q\d\u\w\g\h\s\9\8\6\0\8\2\3\w\4\8\f\6\n\2\q\5\m\t\l\m\r\a\b\x\t\f\j\j\a\r\j\k\j\c\o\m\l\m\i ]] 00:05:57.628 00:05:57.628 real 0m4.300s 00:05:57.628 user 0m2.241s 00:05:57.628 sys 0m1.069s 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:57.628 ************************************ 00:05:57.628 END TEST dd_flags_misc_forced_aio 00:05:57.628 ************************************ 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:57.628 00:05:57.628 real 0m19.810s 00:05:57.628 user 0m9.393s 00:05:57.628 sys 0m6.317s 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.628 13:15:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:05:57.628 ************************************ 00:05:57.628 END TEST spdk_dd_posix 00:05:57.628 ************************************ 00:05:57.887 13:15:46 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:57.887 13:15:46 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.887 13:15:46 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.887 13:15:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:57.887 ************************************ 00:05:57.887 START TEST spdk_dd_malloc 00:05:57.887 ************************************ 00:05:57.887 13:15:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:57.887 * Looking for test storage... 00:05:57.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:57.887 13:15:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.887 13:15:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.887 13:15:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.887 --rc genhtml_branch_coverage=1 00:05:57.887 --rc genhtml_function_coverage=1 00:05:57.887 --rc genhtml_legend=1 00:05:57.887 --rc geninfo_all_blocks=1 00:05:57.887 --rc geninfo_unexecuted_blocks=1 00:05:57.887 00:05:57.887 ' 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.887 --rc genhtml_branch_coverage=1 00:05:57.887 --rc genhtml_function_coverage=1 00:05:57.887 --rc genhtml_legend=1 00:05:57.887 --rc geninfo_all_blocks=1 00:05:57.887 --rc geninfo_unexecuted_blocks=1 00:05:57.887 00:05:57.887 ' 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.887 --rc genhtml_branch_coverage=1 00:05:57.887 --rc genhtml_function_coverage=1 00:05:57.887 --rc genhtml_legend=1 00:05:57.887 --rc geninfo_all_blocks=1 00:05:57.887 --rc geninfo_unexecuted_blocks=1 00:05:57.887 00:05:57.887 ' 00:05:57.887 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.887 --rc genhtml_branch_coverage=1 00:05:57.887 --rc genhtml_function_coverage=1 00:05:57.887 --rc genhtml_legend=1 00:05:57.888 --rc geninfo_all_blocks=1 00:05:57.888 --rc geninfo_unexecuted_blocks=1 00:05:57.888 00:05:57.888 ' 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.888 13:15:47 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:57.888 ************************************ 00:05:57.888 START TEST dd_malloc_copy 00:05:57.888 ************************************ 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:57.888 13:15:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:58.145 [2024-11-17 13:15:47.126322] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:58.145 [2024-11-17 13:15:47.126905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:05:58.145 { 00:05:58.145 "subsystems": [ 00:05:58.145 { 00:05:58.145 "subsystem": "bdev", 00:05:58.146 "config": [ 00:05:58.146 { 00:05:58.146 "params": { 00:05:58.146 "block_size": 512, 00:05:58.146 "num_blocks": 1048576, 00:05:58.146 "name": "malloc0" 00:05:58.146 }, 00:05:58.146 "method": "bdev_malloc_create" 00:05:58.146 }, 00:05:58.146 { 00:05:58.146 "params": { 00:05:58.146 "block_size": 512, 00:05:58.146 "num_blocks": 1048576, 00:05:58.146 "name": "malloc1" 00:05:58.146 }, 00:05:58.146 "method": "bdev_malloc_create" 00:05:58.146 }, 00:05:58.146 { 00:05:58.146 "method": "bdev_wait_for_examine" 00:05:58.146 } 00:05:58.146 ] 00:05:58.146 } 00:05:58.146 ] 00:05:58.146 } 00:05:58.146 [2024-11-17 13:15:47.272833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.146 [2024-11-17 13:15:47.317066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.404 [2024-11-17 13:15:47.368853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.779  [2024-11-17T13:15:49.940Z] Copying: 221/512 [MB] (221 MBps) [2024-11-17T13:15:50.199Z] Copying: 452/512 [MB] (230 MBps) [2024-11-17T13:15:50.766Z] Copying: 512/512 [MB] (average 217 MBps) 00:06:01.542 00:06:01.542 13:15:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:01.542 13:15:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:01.542 13:15:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:01.542 13:15:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:01.542 [2024-11-17 13:15:50.753418] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:01.542 [2024-11-17 13:15:50.753521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60746 ] 00:06:01.542 { 00:06:01.542 "subsystems": [ 00:06:01.542 { 00:06:01.542 "subsystem": "bdev", 00:06:01.542 "config": [ 00:06:01.542 { 00:06:01.542 "params": { 00:06:01.542 "block_size": 512, 00:06:01.542 "num_blocks": 1048576, 00:06:01.542 "name": "malloc0" 00:06:01.542 }, 00:06:01.542 "method": "bdev_malloc_create" 00:06:01.542 }, 00:06:01.542 { 00:06:01.542 "params": { 00:06:01.542 "block_size": 512, 00:06:01.542 "num_blocks": 1048576, 00:06:01.542 "name": "malloc1" 00:06:01.542 }, 00:06:01.542 "method": "bdev_malloc_create" 00:06:01.542 }, 00:06:01.542 { 00:06:01.543 "method": "bdev_wait_for_examine" 00:06:01.543 } 00:06:01.543 ] 00:06:01.543 } 00:06:01.543 ] 00:06:01.543 } 00:06:01.801 [2024-11-17 13:15:50.904812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.801 [2024-11-17 13:15:50.964191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.061 [2024-11-17 13:15:51.027816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.439  [2024-11-17T13:15:53.625Z] Copying: 229/512 [MB] (229 MBps) [2024-11-17T13:15:53.889Z] Copying: 449/512 [MB] (220 MBps) [2024-11-17T13:15:54.456Z] Copying: 512/512 [MB] (average 224 MBps) 00:06:05.232 00:06:05.232 00:06:05.232 real 0m7.198s 00:06:05.232 user 0m6.146s 00:06:05.232 sys 0m0.894s 00:06:05.232 13:15:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.232 13:15:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 ************************************ 00:06:05.232 END TEST dd_malloc_copy 00:06:05.232 ************************************ 00:06:05.232 00:06:05.232 real 0m7.454s 00:06:05.232 user 0m6.283s 00:06:05.232 sys 0m1.017s 00:06:05.232 13:15:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.232 ************************************ 00:06:05.232 END TEST spdk_dd_malloc 00:06:05.232 ************************************ 00:06:05.232 13:15:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 13:15:54 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:05.232 13:15:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:05.232 13:15:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.232 13:15:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:05.232 ************************************ 00:06:05.232 START TEST spdk_dd_bdev_to_bdev 00:06:05.232 ************************************ 00:06:05.232 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:05.232 * Looking for test storage... 
00:06:05.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.492 --rc genhtml_branch_coverage=1 00:06:05.492 --rc genhtml_function_coverage=1 00:06:05.492 --rc genhtml_legend=1 00:06:05.492 --rc geninfo_all_blocks=1 00:06:05.492 --rc geninfo_unexecuted_blocks=1 00:06:05.492 00:06:05.492 ' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.492 --rc genhtml_branch_coverage=1 00:06:05.492 --rc genhtml_function_coverage=1 00:06:05.492 --rc genhtml_legend=1 00:06:05.492 --rc geninfo_all_blocks=1 00:06:05.492 --rc geninfo_unexecuted_blocks=1 00:06:05.492 00:06:05.492 ' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.492 --rc genhtml_branch_coverage=1 00:06:05.492 --rc genhtml_function_coverage=1 00:06:05.492 --rc genhtml_legend=1 00:06:05.492 --rc geninfo_all_blocks=1 00:06:05.492 --rc geninfo_unexecuted_blocks=1 00:06:05.492 00:06:05.492 ' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.492 --rc genhtml_branch_coverage=1 00:06:05.492 --rc genhtml_function_coverage=1 00:06:05.492 --rc genhtml_legend=1 00:06:05.492 --rc geninfo_all_blocks=1 00:06:05.492 --rc geninfo_unexecuted_blocks=1 00:06:05.492 00:06:05.492 ' 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.492 13:15:54 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.492 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:05.493 ************************************ 00:06:05.493 START TEST dd_inflate_file 00:06:05.493 ************************************ 00:06:05.493 13:15:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:05.493 [2024-11-17 13:15:54.646921] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:05.493 [2024-11-17 13:15:54.647033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60864 ] 00:06:05.751 [2024-11-17 13:15:54.797373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.751 [2024-11-17 13:15:54.869195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.751 [2024-11-17 13:15:54.928315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.010  [2024-11-17T13:15:55.234Z] Copying: 64/64 [MB] (average 1422 MBps) 00:06:06.010 00:06:06.010 00:06:06.010 real 0m0.617s 00:06:06.010 user 0m0.369s 00:06:06.010 sys 0m0.316s 00:06:06.010 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.010 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:06.010 ************************************ 00:06:06.010 END TEST dd_inflate_file 00:06:06.010 ************************************ 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:06.268 ************************************ 00:06:06.268 START TEST dd_copy_to_out_bdev 00:06:06.268 ************************************ 00:06:06.268 13:15:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:06.268 { 00:06:06.269 "subsystems": [ 00:06:06.269 { 00:06:06.269 "subsystem": "bdev", 00:06:06.269 "config": [ 00:06:06.269 { 00:06:06.269 "params": { 00:06:06.269 "trtype": "pcie", 00:06:06.269 "traddr": "0000:00:10.0", 00:06:06.269 "name": "Nvme0" 00:06:06.269 }, 00:06:06.269 "method": "bdev_nvme_attach_controller" 00:06:06.269 }, 00:06:06.269 { 00:06:06.269 "params": { 00:06:06.269 "trtype": "pcie", 00:06:06.269 "traddr": "0000:00:11.0", 00:06:06.269 "name": "Nvme1" 00:06:06.269 }, 00:06:06.269 "method": "bdev_nvme_attach_controller" 00:06:06.269 }, 00:06:06.269 { 00:06:06.269 "method": "bdev_wait_for_examine" 00:06:06.269 } 00:06:06.269 ] 00:06:06.269 } 00:06:06.269 ] 00:06:06.269 } 00:06:06.269 [2024-11-17 13:15:55.323817] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:06.269 [2024-11-17 13:15:55.323953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60900 ] 00:06:06.269 [2024-11-17 13:15:55.473296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.528 [2024-11-17 13:15:55.523195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.528 [2024-11-17 13:15:55.577816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.903  [2024-11-17T13:15:57.127Z] Copying: 52/64 [MB] (52 MBps) [2024-11-17T13:15:57.385Z] Copying: 64/64 [MB] (average 52 MBps) 00:06:08.161 00:06:08.161 00:06:08.161 real 0m1.946s 00:06:08.161 user 0m1.715s 00:06:08.161 sys 0m1.580s 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:08.161 ************************************ 00:06:08.161 END TEST dd_copy_to_out_bdev 00:06:08.161 ************************************ 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:08.161 ************************************ 00:06:08.161 START TEST dd_offset_magic 00:06:08.161 ************************************ 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:08.161 13:15:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:08.161 [2024-11-17 13:15:57.332547] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:08.161 [2024-11-17 13:15:57.332671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60945 ] 00:06:08.162 { 00:06:08.162 "subsystems": [ 00:06:08.162 { 00:06:08.162 "subsystem": "bdev", 00:06:08.162 "config": [ 00:06:08.162 { 00:06:08.162 "params": { 00:06:08.162 "trtype": "pcie", 00:06:08.162 "traddr": "0000:00:10.0", 00:06:08.162 "name": "Nvme0" 00:06:08.162 }, 00:06:08.162 "method": "bdev_nvme_attach_controller" 00:06:08.162 }, 00:06:08.162 { 00:06:08.162 "params": { 00:06:08.162 "trtype": "pcie", 00:06:08.162 "traddr": "0000:00:11.0", 00:06:08.162 "name": "Nvme1" 00:06:08.162 }, 00:06:08.162 "method": "bdev_nvme_attach_controller" 00:06:08.162 }, 00:06:08.162 { 00:06:08.162 "method": "bdev_wait_for_examine" 00:06:08.162 } 00:06:08.162 ] 00:06:08.162 } 00:06:08.162 ] 00:06:08.162 } 00:06:08.420 [2024-11-17 13:15:57.475427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.420 [2024-11-17 13:15:57.534215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.420 [2024-11-17 13:15:57.589521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.677  [2024-11-17T13:15:58.160Z] Copying: 65/65 [MB] (average 890 MBps) 00:06:08.936 00:06:08.936 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:08.936 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:08.936 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:08.936 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:08.936 [2024-11-17 13:15:58.122236] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:08.936 [2024-11-17 13:15:58.122327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60965 ] 00:06:08.936 { 00:06:08.936 "subsystems": [ 00:06:08.936 { 00:06:08.936 "subsystem": "bdev", 00:06:08.936 "config": [ 00:06:08.936 { 00:06:08.936 "params": { 00:06:08.936 "trtype": "pcie", 00:06:08.936 "traddr": "0000:00:10.0", 00:06:08.936 "name": "Nvme0" 00:06:08.936 }, 00:06:08.936 "method": "bdev_nvme_attach_controller" 00:06:08.936 }, 00:06:08.936 { 00:06:08.936 "params": { 00:06:08.936 "trtype": "pcie", 00:06:08.936 "traddr": "0000:00:11.0", 00:06:08.936 "name": "Nvme1" 00:06:08.936 }, 00:06:08.936 "method": "bdev_nvme_attach_controller" 00:06:08.936 }, 00:06:08.936 { 00:06:08.936 "method": "bdev_wait_for_examine" 00:06:08.936 } 00:06:08.936 ] 00:06:08.936 } 00:06:08.936 ] 00:06:08.936 } 00:06:09.195 [2024-11-17 13:15:58.261835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.195 [2024-11-17 13:15:58.323418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.195 [2024-11-17 13:15:58.377256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.454  [2024-11-17T13:15:58.937Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:09.713 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:09.713 13:15:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:09.713 { 00:06:09.713 "subsystems": [ 00:06:09.713 { 00:06:09.713 "subsystem": "bdev", 00:06:09.713 "config": [ 00:06:09.713 { 00:06:09.713 "params": { 00:06:09.713 "trtype": "pcie", 00:06:09.713 "traddr": "0000:00:10.0", 00:06:09.713 "name": "Nvme0" 00:06:09.713 }, 00:06:09.713 "method": "bdev_nvme_attach_controller" 00:06:09.713 }, 00:06:09.713 { 00:06:09.713 "params": { 00:06:09.713 "trtype": "pcie", 00:06:09.713 "traddr": "0000:00:11.0", 00:06:09.713 "name": "Nvme1" 00:06:09.713 }, 00:06:09.714 "method": "bdev_nvme_attach_controller" 00:06:09.714 }, 00:06:09.714 { 00:06:09.714 "method": "bdev_wait_for_examine" 00:06:09.714 } 00:06:09.714 ] 00:06:09.714 } 00:06:09.714 ] 00:06:09.714 } 00:06:09.714 [2024-11-17 13:15:58.803343] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:09.714 [2024-11-17 13:15:58.803458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60980 ] 00:06:09.973 [2024-11-17 13:15:58.949102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.973 [2024-11-17 13:15:58.999974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.973 [2024-11-17 13:15:59.055262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.231  [2024-11-17T13:15:59.713Z] Copying: 65/65 [MB] (average 902 MBps) 00:06:10.489 00:06:10.489 13:15:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:10.489 13:15:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:10.490 13:15:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:10.490 13:15:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:10.490 [2024-11-17 13:15:59.599532] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:10.490 [2024-11-17 13:15:59.599670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 00:06:10.490 { 00:06:10.490 "subsystems": [ 00:06:10.490 { 00:06:10.490 "subsystem": "bdev", 00:06:10.490 "config": [ 00:06:10.490 { 00:06:10.490 "params": { 00:06:10.490 "trtype": "pcie", 00:06:10.490 "traddr": "0000:00:10.0", 00:06:10.490 "name": "Nvme0" 00:06:10.490 }, 00:06:10.490 "method": "bdev_nvme_attach_controller" 00:06:10.490 }, 00:06:10.490 { 00:06:10.490 "params": { 00:06:10.490 "trtype": "pcie", 00:06:10.490 "traddr": "0000:00:11.0", 00:06:10.490 "name": "Nvme1" 00:06:10.490 }, 00:06:10.490 "method": "bdev_nvme_attach_controller" 00:06:10.490 }, 00:06:10.490 { 00:06:10.490 "method": "bdev_wait_for_examine" 00:06:10.490 } 00:06:10.490 ] 00:06:10.490 } 00:06:10.490 ] 00:06:10.490 } 00:06:10.748 [2024-11-17 13:15:59.744609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.748 [2024-11-17 13:15:59.787873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.748 [2024-11-17 13:15:59.846477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.006  [2024-11-17T13:16:00.230Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:11.006 00:06:11.006 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:11.006 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:11.006 00:06:11.006 real 0m2.943s 00:06:11.006 user 0m2.135s 00:06:11.006 sys 0m0.893s 00:06:11.006 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.006 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:11.006 
************************************ 00:06:11.006 END TEST dd_offset_magic 00:06:11.006 ************************************ 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:11.264 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:11.264 { 00:06:11.264 "subsystems": [ 00:06:11.264 { 00:06:11.264 "subsystem": "bdev", 00:06:11.264 "config": [ 00:06:11.264 { 00:06:11.264 "params": { 00:06:11.264 "trtype": "pcie", 00:06:11.264 "traddr": "0000:00:10.0", 00:06:11.264 "name": "Nvme0" 00:06:11.264 }, 00:06:11.264 "method": "bdev_nvme_attach_controller" 00:06:11.264 }, 00:06:11.264 { 00:06:11.264 "params": { 00:06:11.264 "trtype": "pcie", 00:06:11.264 "traddr": "0000:00:11.0", 00:06:11.264 "name": "Nvme1" 00:06:11.264 }, 00:06:11.264 "method": "bdev_nvme_attach_controller" 00:06:11.264 }, 00:06:11.264 { 00:06:11.264 "method": "bdev_wait_for_examine" 00:06:11.264 } 00:06:11.264 ] 00:06:11.264 } 00:06:11.264 ] 00:06:11.264 } 00:06:11.264 [2024-11-17 13:16:00.311486] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:11.264 [2024-11-17 13:16:00.311590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61033 ] 00:06:11.264 [2024-11-17 13:16:00.462589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.522 [2024-11-17 13:16:00.512712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.522 [2024-11-17 13:16:00.567852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.781  [2024-11-17T13:16:01.005Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:11.781 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:11.781 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:11.782 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:11.782 13:16:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:11.782 { 00:06:11.782 "subsystems": [ 00:06:11.782 { 00:06:11.782 "subsystem": "bdev", 00:06:11.782 "config": [ 00:06:11.782 { 00:06:11.782 "params": { 00:06:11.782 "trtype": "pcie", 00:06:11.782 "traddr": "0000:00:10.0", 00:06:11.782 "name": "Nvme0" 00:06:11.782 }, 00:06:11.782 "method": "bdev_nvme_attach_controller" 00:06:11.782 }, 00:06:11.782 { 00:06:11.782 "params": { 00:06:11.782 "trtype": "pcie", 00:06:11.782 "traddr": "0000:00:11.0", 00:06:11.782 "name": "Nvme1" 00:06:11.782 }, 00:06:11.782 "method": "bdev_nvme_attach_controller" 00:06:11.782 }, 00:06:11.782 { 00:06:11.782 "method": "bdev_wait_for_examine" 00:06:11.782 } 00:06:11.782 ] 00:06:11.782 } 00:06:11.782 ] 00:06:11.782 } 00:06:11.782 [2024-11-17 13:16:00.994333] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:11.782 [2024-11-17 13:16:00.994450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61049 ] 00:06:12.040 [2024-11-17 13:16:01.143231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.040 [2024-11-17 13:16:01.190804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.040 [2024-11-17 13:16:01.246314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.299  [2024-11-17T13:16:01.782Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:12.558 00:06:12.558 13:16:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:12.558 00:06:12.558 real 0m7.268s 00:06:12.558 user 0m5.353s 00:06:12.558 sys 0m3.488s 00:06:12.558 13:16:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.558 13:16:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:12.558 ************************************ 00:06:12.558 END TEST spdk_dd_bdev_to_bdev 00:06:12.558 ************************************ 00:06:12.558 13:16:01 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:12.558 13:16:01 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:12.558 13:16:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.558 13:16:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.558 13:16:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:12.558 ************************************ 00:06:12.558 START TEST spdk_dd_uring 00:06:12.558 ************************************ 00:06:12.558 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:12.558 * Looking for test storage... 
00:06:12.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.558 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.558 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.558 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.817 --rc genhtml_branch_coverage=1 00:06:12.817 --rc genhtml_function_coverage=1 00:06:12.817 --rc genhtml_legend=1 00:06:12.817 --rc geninfo_all_blocks=1 00:06:12.817 --rc geninfo_unexecuted_blocks=1 00:06:12.817 00:06:12.817 ' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.817 --rc genhtml_branch_coverage=1 00:06:12.817 --rc genhtml_function_coverage=1 00:06:12.817 --rc genhtml_legend=1 00:06:12.817 --rc geninfo_all_blocks=1 00:06:12.817 --rc geninfo_unexecuted_blocks=1 00:06:12.817 00:06:12.817 ' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.817 --rc genhtml_branch_coverage=1 00:06:12.817 --rc genhtml_function_coverage=1 00:06:12.817 --rc genhtml_legend=1 00:06:12.817 --rc geninfo_all_blocks=1 00:06:12.817 --rc geninfo_unexecuted_blocks=1 00:06:12.817 00:06:12.817 ' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.817 --rc genhtml_branch_coverage=1 00:06:12.817 --rc genhtml_function_coverage=1 00:06:12.817 --rc genhtml_legend=1 00:06:12.817 --rc geninfo_all_blocks=1 00:06:12.817 --rc geninfo_unexecuted_blocks=1 00:06:12.817 00:06:12.817 ' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:12.817 ************************************ 00:06:12.817 START TEST dd_uring_copy 00:06:12.817 ************************************ 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:12.817 
13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:12.817 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:12.818 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:12.818 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.818 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=qkx6jhuda1nkqhor6blniq7u62mzahrobajq8zily8o1tzlpuyr1kuxs1n28hpz5qy5ihjkls1xlb12fq78vor7sxc3wqiosrzhlhv6fuvrc1u2roaw25s6fb9ipsq4c0elt37hl2sovxcc3ejn17hx0m12vra6hg0ynt6yyt59xvpqxji51i7h6neopn17j3tpbcxpe0bj9t5nvoky1bava0eqvdlswa9rtu4ruliwioyahjlk6olih2se1f96w8w9etyxw0i3t6lunbd7toiklafs95g4i7gt5smu09pmesdefvn80yxenl6sjrterma3h0ohwu0m0aovmiifaj0r13a5j8bvbuyf55wyodtrk75340wacx6dguz9u3johxhxk4lbimjq4yasrbr98kdr13v6zw5ti9g499ynpdk4mltse18bz5erws3zz7v1qmv2r6ivtvgt9qs82i7frhky7jr325p04z2k4hgow53rb2szzussr41hplau3kihw56j98p4z5ukfxvw3wrnl50yzpy9ekox2r17sw7mv3rv8idp3973ivf7phlxzfw8nkxi2vwfzyo5w1k697lyofr41tbqisl655huqbau18puocac0t1yge3fdrznjfih1n23258qzssimqgn33s49fss9xmsdw4ooph1and21feogcnxnkpqqem1wz840ydpv38xko5ku7wzunz3p4fxggof1n0ez730hcf8ajoqv74yejeyrd8km4rekpaqbx9zrmpr7tm32tx6feaob25bz2qsd9nlv0vi73je18diut5eq5i4ne1bs3g5cw6fw5uryjjdg7eebcxxyr8r63hacehpdcaacyf88le9lbl85g7bgyqy7t90xehbiuap70yjoihymt4vaw3y956fp6gfcl24urx5llouyfn8qjhf1or2cns9q90k0a9x1na3hteaf9sb4g1l3vdr9p0k5j5odr9dno7stlq7ufh4xk1wr2wca0et6i51g1mhc062vg1sj 00:06:12.818 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
qkx6jhuda1nkqhor6blniq7u62mzahrobajq8zily8o1tzlpuyr1kuxs1n28hpz5qy5ihjkls1xlb12fq78vor7sxc3wqiosrzhlhv6fuvrc1u2roaw25s6fb9ipsq4c0elt37hl2sovxcc3ejn17hx0m12vra6hg0ynt6yyt59xvpqxji51i7h6neopn17j3tpbcxpe0bj9t5nvoky1bava0eqvdlswa9rtu4ruliwioyahjlk6olih2se1f96w8w9etyxw0i3t6lunbd7toiklafs95g4i7gt5smu09pmesdefvn80yxenl6sjrterma3h0ohwu0m0aovmiifaj0r13a5j8bvbuyf55wyodtrk75340wacx6dguz9u3johxhxk4lbimjq4yasrbr98kdr13v6zw5ti9g499ynpdk4mltse18bz5erws3zz7v1qmv2r6ivtvgt9qs82i7frhky7jr325p04z2k4hgow53rb2szzussr41hplau3kihw56j98p4z5ukfxvw3wrnl50yzpy9ekox2r17sw7mv3rv8idp3973ivf7phlxzfw8nkxi2vwfzyo5w1k697lyofr41tbqisl655huqbau18puocac0t1yge3fdrznjfih1n23258qzssimqgn33s49fss9xmsdw4ooph1and21feogcnxnkpqqem1wz840ydpv38xko5ku7wzunz3p4fxggof1n0ez730hcf8ajoqv74yejeyrd8km4rekpaqbx9zrmpr7tm32tx6feaob25bz2qsd9nlv0vi73je18diut5eq5i4ne1bs3g5cw6fw5uryjjdg7eebcxxyr8r63hacehpdcaacyf88le9lbl85g7bgyqy7t90xehbiuap70yjoihymt4vaw3y956fp6gfcl24urx5llouyfn8qjhf1or2cns9q90k0a9x1na3hteaf9sb4g1l3vdr9p0k5j5odr9dno7stlq7ufh4xk1wr2wca0et6i51g1mhc062vg1sj 00:06:12.818 13:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:12.818 [2024-11-17 13:16:01.969958] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:12.818 [2024-11-17 13:16:01.970089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61128 ] 00:06:13.076 [2024-11-17 13:16:02.116662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.076 [2024-11-17 13:16:02.173747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.076 [2024-11-17 13:16:02.228202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.035  [2024-11-17T13:16:03.518Z] Copying: 511/511 [MB] (average 1068 MBps) 00:06:14.294 00:06:14.294 13:16:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:14.294 13:16:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:14.295 13:16:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:14.295 13:16:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.295 [2024-11-17 13:16:03.357680] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:14.295 [2024-11-17 13:16:03.357831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:06:14.295 { 00:06:14.295 "subsystems": [ 00:06:14.295 { 00:06:14.295 "subsystem": "bdev", 00:06:14.295 "config": [ 00:06:14.295 { 00:06:14.295 "params": { 00:06:14.295 "block_size": 512, 00:06:14.295 "num_blocks": 1048576, 00:06:14.295 "name": "malloc0" 00:06:14.295 }, 00:06:14.295 "method": "bdev_malloc_create" 00:06:14.295 }, 00:06:14.295 { 00:06:14.295 "params": { 00:06:14.295 "filename": "/dev/zram1", 00:06:14.295 "name": "uring0" 00:06:14.295 }, 00:06:14.295 "method": "bdev_uring_create" 00:06:14.295 }, 00:06:14.295 { 00:06:14.295 "method": "bdev_wait_for_examine" 00:06:14.295 } 00:06:14.295 ] 00:06:14.295 } 00:06:14.295 ] 00:06:14.295 } 00:06:14.295 [2024-11-17 13:16:03.496457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.554 [2024-11-17 13:16:03.555797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.554 [2024-11-17 13:16:03.610036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.930  [2024-11-17T13:16:06.089Z] Copying: 229/512 [MB] (229 MBps) [2024-11-17T13:16:06.089Z] Copying: 464/512 [MB] (234 MBps) [2024-11-17T13:16:06.655Z] Copying: 512/512 [MB] (average 232 MBps) 00:06:17.431 00:06:17.431 13:16:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:17.431 13:16:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:17.431 13:16:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:17.432 13:16:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:17.432 [2024-11-17 13:16:06.464078] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:17.432 [2024-11-17 13:16:06.464245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61195 ] 00:06:17.432 { 00:06:17.432 "subsystems": [ 00:06:17.432 { 00:06:17.432 "subsystem": "bdev", 00:06:17.432 "config": [ 00:06:17.432 { 00:06:17.432 "params": { 00:06:17.432 "block_size": 512, 00:06:17.432 "num_blocks": 1048576, 00:06:17.432 "name": "malloc0" 00:06:17.432 }, 00:06:17.432 "method": "bdev_malloc_create" 00:06:17.432 }, 00:06:17.432 { 00:06:17.432 "params": { 00:06:17.432 "filename": "/dev/zram1", 00:06:17.432 "name": "uring0" 00:06:17.432 }, 00:06:17.432 "method": "bdev_uring_create" 00:06:17.432 }, 00:06:17.432 { 00:06:17.432 "method": "bdev_wait_for_examine" 00:06:17.432 } 00:06:17.432 ] 00:06:17.432 } 00:06:17.432 ] 00:06:17.432 } 00:06:17.432 [2024-11-17 13:16:06.609143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.690 [2024-11-17 13:16:06.652474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.690 [2024-11-17 13:16:06.703462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.068  [2024-11-17T13:16:09.228Z] Copying: 177/512 [MB] (177 MBps) [2024-11-17T13:16:10.164Z] Copying: 316/512 [MB] (138 MBps) [2024-11-17T13:16:10.164Z] Copying: 473/512 [MB] (157 MBps) [2024-11-17T13:16:10.733Z] Copying: 512/512 [MB] (average 159 MBps) 00:06:21.509 00:06:21.509 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:21.509 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ qkx6jhuda1nkqhor6blniq7u62mzahrobajq8zily8o1tzlpuyr1kuxs1n28hpz5qy5ihjkls1xlb12fq78vor7sxc3wqiosrzhlhv6fuvrc1u2roaw25s6fb9ipsq4c0elt37hl2sovxcc3ejn17hx0m12vra6hg0ynt6yyt59xvpqxji51i7h6neopn17j3tpbcxpe0bj9t5nvoky1bava0eqvdlswa9rtu4ruliwioyahjlk6olih2se1f96w8w9etyxw0i3t6lunbd7toiklafs95g4i7gt5smu09pmesdefvn80yxenl6sjrterma3h0ohwu0m0aovmiifaj0r13a5j8bvbuyf55wyodtrk75340wacx6dguz9u3johxhxk4lbimjq4yasrbr98kdr13v6zw5ti9g499ynpdk4mltse18bz5erws3zz7v1qmv2r6ivtvgt9qs82i7frhky7jr325p04z2k4hgow53rb2szzussr41hplau3kihw56j98p4z5ukfxvw3wrnl50yzpy9ekox2r17sw7mv3rv8idp3973ivf7phlxzfw8nkxi2vwfzyo5w1k697lyofr41tbqisl655huqbau18puocac0t1yge3fdrznjfih1n23258qzssimqgn33s49fss9xmsdw4ooph1and21feogcnxnkpqqem1wz840ydpv38xko5ku7wzunz3p4fxggof1n0ez730hcf8ajoqv74yejeyrd8km4rekpaqbx9zrmpr7tm32tx6feaob25bz2qsd9nlv0vi73je18diut5eq5i4ne1bs3g5cw6fw5uryjjdg7eebcxxyr8r63hacehpdcaacyf88le9lbl85g7bgyqy7t90xehbiuap70yjoihymt4vaw3y956fp6gfcl24urx5llouyfn8qjhf1or2cns9q90k0a9x1na3hteaf9sb4g1l3vdr9p0k5j5odr9dno7stlq7ufh4xk1wr2wca0et6i51g1mhc062vg1sj == 
\q\k\x\6\j\h\u\d\a\1\n\k\q\h\o\r\6\b\l\n\i\q\7\u\6\2\m\z\a\h\r\o\b\a\j\q\8\z\i\l\y\8\o\1\t\z\l\p\u\y\r\1\k\u\x\s\1\n\2\8\h\p\z\5\q\y\5\i\h\j\k\l\s\1\x\l\b\1\2\f\q\7\8\v\o\r\7\s\x\c\3\w\q\i\o\s\r\z\h\l\h\v\6\f\u\v\r\c\1\u\2\r\o\a\w\2\5\s\6\f\b\9\i\p\s\q\4\c\0\e\l\t\3\7\h\l\2\s\o\v\x\c\c\3\e\j\n\1\7\h\x\0\m\1\2\v\r\a\6\h\g\0\y\n\t\6\y\y\t\5\9\x\v\p\q\x\j\i\5\1\i\7\h\6\n\e\o\p\n\1\7\j\3\t\p\b\c\x\p\e\0\b\j\9\t\5\n\v\o\k\y\1\b\a\v\a\0\e\q\v\d\l\s\w\a\9\r\t\u\4\r\u\l\i\w\i\o\y\a\h\j\l\k\6\o\l\i\h\2\s\e\1\f\9\6\w\8\w\9\e\t\y\x\w\0\i\3\t\6\l\u\n\b\d\7\t\o\i\k\l\a\f\s\9\5\g\4\i\7\g\t\5\s\m\u\0\9\p\m\e\s\d\e\f\v\n\8\0\y\x\e\n\l\6\s\j\r\t\e\r\m\a\3\h\0\o\h\w\u\0\m\0\a\o\v\m\i\i\f\a\j\0\r\1\3\a\5\j\8\b\v\b\u\y\f\5\5\w\y\o\d\t\r\k\7\5\3\4\0\w\a\c\x\6\d\g\u\z\9\u\3\j\o\h\x\h\x\k\4\l\b\i\m\j\q\4\y\a\s\r\b\r\9\8\k\d\r\1\3\v\6\z\w\5\t\i\9\g\4\9\9\y\n\p\d\k\4\m\l\t\s\e\1\8\b\z\5\e\r\w\s\3\z\z\7\v\1\q\m\v\2\r\6\i\v\t\v\g\t\9\q\s\8\2\i\7\f\r\h\k\y\7\j\r\3\2\5\p\0\4\z\2\k\4\h\g\o\w\5\3\r\b\2\s\z\z\u\s\s\r\4\1\h\p\l\a\u\3\k\i\h\w\5\6\j\9\8\p\4\z\5\u\k\f\x\v\w\3\w\r\n\l\5\0\y\z\p\y\9\e\k\o\x\2\r\1\7\s\w\7\m\v\3\r\v\8\i\d\p\3\9\7\3\i\v\f\7\p\h\l\x\z\f\w\8\n\k\x\i\2\v\w\f\z\y\o\5\w\1\k\6\9\7\l\y\o\f\r\4\1\t\b\q\i\s\l\6\5\5\h\u\q\b\a\u\1\8\p\u\o\c\a\c\0\t\1\y\g\e\3\f\d\r\z\n\j\f\i\h\1\n\2\3\2\5\8\q\z\s\s\i\m\q\g\n\3\3\s\4\9\f\s\s\9\x\m\s\d\w\4\o\o\p\h\1\a\n\d\2\1\f\e\o\g\c\n\x\n\k\p\q\q\e\m\1\w\z\8\4\0\y\d\p\v\3\8\x\k\o\5\k\u\7\w\z\u\n\z\3\p\4\f\x\g\g\o\f\1\n\0\e\z\7\3\0\h\c\f\8\a\j\o\q\v\7\4\y\e\j\e\y\r\d\8\k\m\4\r\e\k\p\a\q\b\x\9\z\r\m\p\r\7\t\m\3\2\t\x\6\f\e\a\o\b\2\5\b\z\2\q\s\d\9\n\l\v\0\v\i\7\3\j\e\1\8\d\i\u\t\5\e\q\5\i\4\n\e\1\b\s\3\g\5\c\w\6\f\w\5\u\r\y\j\j\d\g\7\e\e\b\c\x\x\y\r\8\r\6\3\h\a\c\e\h\p\d\c\a\a\c\y\f\8\8\l\e\9\l\b\l\8\5\g\7\b\g\y\q\y\7\t\9\0\x\e\h\b\i\u\a\p\7\0\y\j\o\i\h\y\m\t\4\v\a\w\3\y\9\5\6\f\p\6\g\f\c\l\2\4\u\r\x\5\l\l\o\u\y\f\n\8\q\j\h\f\1\o\r\2\c\n\s\9\q\9\0\k\0\a\9\x\1\n\a\3\h\t\e\a\f\9\s\b\4\g\1\l\3\v\d\r\9\p\0\k\5\j\5\o\d\r\9\d\n\o\7\s\t\l\q\7\u\f\h\4\x\k\1\w\r\2\w\c\a\0\e\t\6\i\5\1\g\1\m\h\c\0\6\2\v\g\1\s\j ]] 00:06:21.509 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:21.510 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ qkx6jhuda1nkqhor6blniq7u62mzahrobajq8zily8o1tzlpuyr1kuxs1n28hpz5qy5ihjkls1xlb12fq78vor7sxc3wqiosrzhlhv6fuvrc1u2roaw25s6fb9ipsq4c0elt37hl2sovxcc3ejn17hx0m12vra6hg0ynt6yyt59xvpqxji51i7h6neopn17j3tpbcxpe0bj9t5nvoky1bava0eqvdlswa9rtu4ruliwioyahjlk6olih2se1f96w8w9etyxw0i3t6lunbd7toiklafs95g4i7gt5smu09pmesdefvn80yxenl6sjrterma3h0ohwu0m0aovmiifaj0r13a5j8bvbuyf55wyodtrk75340wacx6dguz9u3johxhxk4lbimjq4yasrbr98kdr13v6zw5ti9g499ynpdk4mltse18bz5erws3zz7v1qmv2r6ivtvgt9qs82i7frhky7jr325p04z2k4hgow53rb2szzussr41hplau3kihw56j98p4z5ukfxvw3wrnl50yzpy9ekox2r17sw7mv3rv8idp3973ivf7phlxzfw8nkxi2vwfzyo5w1k697lyofr41tbqisl655huqbau18puocac0t1yge3fdrznjfih1n23258qzssimqgn33s49fss9xmsdw4ooph1and21feogcnxnkpqqem1wz840ydpv38xko5ku7wzunz3p4fxggof1n0ez730hcf8ajoqv74yejeyrd8km4rekpaqbx9zrmpr7tm32tx6feaob25bz2qsd9nlv0vi73je18diut5eq5i4ne1bs3g5cw6fw5uryjjdg7eebcxxyr8r63hacehpdcaacyf88le9lbl85g7bgyqy7t90xehbiuap70yjoihymt4vaw3y956fp6gfcl24urx5llouyfn8qjhf1or2cns9q90k0a9x1na3hteaf9sb4g1l3vdr9p0k5j5odr9dno7stlq7ufh4xk1wr2wca0et6i51g1mhc062vg1sj == 
\q\k\x\6\j\h\u\d\a\1\n\k\q\h\o\r\6\b\l\n\i\q\7\u\6\2\m\z\a\h\r\o\b\a\j\q\8\z\i\l\y\8\o\1\t\z\l\p\u\y\r\1\k\u\x\s\1\n\2\8\h\p\z\5\q\y\5\i\h\j\k\l\s\1\x\l\b\1\2\f\q\7\8\v\o\r\7\s\x\c\3\w\q\i\o\s\r\z\h\l\h\v\6\f\u\v\r\c\1\u\2\r\o\a\w\2\5\s\6\f\b\9\i\p\s\q\4\c\0\e\l\t\3\7\h\l\2\s\o\v\x\c\c\3\e\j\n\1\7\h\x\0\m\1\2\v\r\a\6\h\g\0\y\n\t\6\y\y\t\5\9\x\v\p\q\x\j\i\5\1\i\7\h\6\n\e\o\p\n\1\7\j\3\t\p\b\c\x\p\e\0\b\j\9\t\5\n\v\o\k\y\1\b\a\v\a\0\e\q\v\d\l\s\w\a\9\r\t\u\4\r\u\l\i\w\i\o\y\a\h\j\l\k\6\o\l\i\h\2\s\e\1\f\9\6\w\8\w\9\e\t\y\x\w\0\i\3\t\6\l\u\n\b\d\7\t\o\i\k\l\a\f\s\9\5\g\4\i\7\g\t\5\s\m\u\0\9\p\m\e\s\d\e\f\v\n\8\0\y\x\e\n\l\6\s\j\r\t\e\r\m\a\3\h\0\o\h\w\u\0\m\0\a\o\v\m\i\i\f\a\j\0\r\1\3\a\5\j\8\b\v\b\u\y\f\5\5\w\y\o\d\t\r\k\7\5\3\4\0\w\a\c\x\6\d\g\u\z\9\u\3\j\o\h\x\h\x\k\4\l\b\i\m\j\q\4\y\a\s\r\b\r\9\8\k\d\r\1\3\v\6\z\w\5\t\i\9\g\4\9\9\y\n\p\d\k\4\m\l\t\s\e\1\8\b\z\5\e\r\w\s\3\z\z\7\v\1\q\m\v\2\r\6\i\v\t\v\g\t\9\q\s\8\2\i\7\f\r\h\k\y\7\j\r\3\2\5\p\0\4\z\2\k\4\h\g\o\w\5\3\r\b\2\s\z\z\u\s\s\r\4\1\h\p\l\a\u\3\k\i\h\w\5\6\j\9\8\p\4\z\5\u\k\f\x\v\w\3\w\r\n\l\5\0\y\z\p\y\9\e\k\o\x\2\r\1\7\s\w\7\m\v\3\r\v\8\i\d\p\3\9\7\3\i\v\f\7\p\h\l\x\z\f\w\8\n\k\x\i\2\v\w\f\z\y\o\5\w\1\k\6\9\7\l\y\o\f\r\4\1\t\b\q\i\s\l\6\5\5\h\u\q\b\a\u\1\8\p\u\o\c\a\c\0\t\1\y\g\e\3\f\d\r\z\n\j\f\i\h\1\n\2\3\2\5\8\q\z\s\s\i\m\q\g\n\3\3\s\4\9\f\s\s\9\x\m\s\d\w\4\o\o\p\h\1\a\n\d\2\1\f\e\o\g\c\n\x\n\k\p\q\q\e\m\1\w\z\8\4\0\y\d\p\v\3\8\x\k\o\5\k\u\7\w\z\u\n\z\3\p\4\f\x\g\g\o\f\1\n\0\e\z\7\3\0\h\c\f\8\a\j\o\q\v\7\4\y\e\j\e\y\r\d\8\k\m\4\r\e\k\p\a\q\b\x\9\z\r\m\p\r\7\t\m\3\2\t\x\6\f\e\a\o\b\2\5\b\z\2\q\s\d\9\n\l\v\0\v\i\7\3\j\e\1\8\d\i\u\t\5\e\q\5\i\4\n\e\1\b\s\3\g\5\c\w\6\f\w\5\u\r\y\j\j\d\g\7\e\e\b\c\x\x\y\r\8\r\6\3\h\a\c\e\h\p\d\c\a\a\c\y\f\8\8\l\e\9\l\b\l\8\5\g\7\b\g\y\q\y\7\t\9\0\x\e\h\b\i\u\a\p\7\0\y\j\o\i\h\y\m\t\4\v\a\w\3\y\9\5\6\f\p\6\g\f\c\l\2\4\u\r\x\5\l\l\o\u\y\f\n\8\q\j\h\f\1\o\r\2\c\n\s\9\q\9\0\k\0\a\9\x\1\n\a\3\h\t\e\a\f\9\s\b\4\g\1\l\3\v\d\r\9\p\0\k\5\j\5\o\d\r\9\d\n\o\7\s\t\l\q\7\u\f\h\4\x\k\1\w\r\2\w\c\a\0\e\t\6\i\5\1\g\1\m\h\c\0\6\2\v\g\1\s\j ]] 00:06:21.510 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:21.768 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:21.769 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:21.769 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:21.769 13:16:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:21.769 [2024-11-17 13:16:10.846547] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:21.769 [2024-11-17 13:16:10.846655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61263 ] 00:06:21.769 { 00:06:21.769 "subsystems": [ 00:06:21.769 { 00:06:21.769 "subsystem": "bdev", 00:06:21.769 "config": [ 00:06:21.769 { 00:06:21.769 "params": { 00:06:21.769 "block_size": 512, 00:06:21.769 "num_blocks": 1048576, 00:06:21.769 "name": "malloc0" 00:06:21.769 }, 00:06:21.769 "method": "bdev_malloc_create" 00:06:21.769 }, 00:06:21.769 { 00:06:21.769 "params": { 00:06:21.769 "filename": "/dev/zram1", 00:06:21.769 "name": "uring0" 00:06:21.769 }, 00:06:21.769 "method": "bdev_uring_create" 00:06:21.769 }, 00:06:21.769 { 00:06:21.769 "method": "bdev_wait_for_examine" 00:06:21.769 } 00:06:21.769 ] 00:06:21.769 } 00:06:21.769 ] 00:06:21.769 } 00:06:21.769 [2024-11-17 13:16:10.989280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.027 [2024-11-17 13:16:11.040239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.027 [2024-11-17 13:16:11.093597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.409  [2024-11-17T13:16:13.587Z] Copying: 166/512 [MB] (166 MBps) [2024-11-17T13:16:14.523Z] Copying: 331/512 [MB] (165 MBps) [2024-11-17T13:16:14.523Z] Copying: 500/512 [MB] (169 MBps) [2024-11-17T13:16:14.782Z] Copying: 512/512 [MB] (average 166 MBps) 00:06:25.558 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:25.558 13:16:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 [2024-11-17 13:16:14.804706] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:25.817 [2024-11-17 13:16:14.804851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61319 ] 00:06:25.817 { 00:06:25.817 "subsystems": [ 00:06:25.817 { 00:06:25.817 "subsystem": "bdev", 00:06:25.817 "config": [ 00:06:25.817 { 00:06:25.817 "params": { 00:06:25.817 "block_size": 512, 00:06:25.817 "num_blocks": 1048576, 00:06:25.817 "name": "malloc0" 00:06:25.817 }, 00:06:25.817 "method": "bdev_malloc_create" 00:06:25.817 }, 00:06:25.817 { 00:06:25.817 "params": { 00:06:25.817 "filename": "/dev/zram1", 00:06:25.817 "name": "uring0" 00:06:25.817 }, 00:06:25.817 "method": "bdev_uring_create" 00:06:25.817 }, 00:06:25.817 { 00:06:25.817 "params": { 00:06:25.817 "name": "uring0" 00:06:25.817 }, 00:06:25.817 "method": "bdev_uring_delete" 00:06:25.817 }, 00:06:25.817 { 00:06:25.817 "method": "bdev_wait_for_examine" 00:06:25.817 } 00:06:25.817 ] 00:06:25.817 } 00:06:25.817 ] 00:06:25.817 } 00:06:25.817 [2024-11-17 13:16:14.953128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.817 [2024-11-17 13:16:15.005327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.075 [2024-11-17 13:16:15.068157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.334  [2024-11-17T13:16:15.817Z] Copying: 0/0 [B] (average 0 Bps) 00:06:26.593 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.593 13:16:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.594 13:16:15 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:26.594 [2024-11-17 13:16:15.753701] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:26.594 [2024-11-17 13:16:15.753859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61352 ] 00:06:26.594 { 00:06:26.594 "subsystems": [ 00:06:26.594 { 00:06:26.594 "subsystem": "bdev", 00:06:26.594 "config": [ 00:06:26.594 { 00:06:26.594 "params": { 00:06:26.594 "block_size": 512, 00:06:26.594 "num_blocks": 1048576, 00:06:26.594 "name": "malloc0" 00:06:26.594 }, 00:06:26.594 "method": "bdev_malloc_create" 00:06:26.594 }, 00:06:26.594 { 00:06:26.594 "params": { 00:06:26.594 "filename": "/dev/zram1", 00:06:26.594 "name": "uring0" 00:06:26.594 }, 00:06:26.594 "method": "bdev_uring_create" 00:06:26.594 }, 00:06:26.594 { 00:06:26.594 "params": { 00:06:26.594 "name": "uring0" 00:06:26.594 }, 00:06:26.594 "method": "bdev_uring_delete" 00:06:26.594 }, 00:06:26.594 { 00:06:26.594 "method": "bdev_wait_for_examine" 00:06:26.594 } 00:06:26.594 ] 00:06:26.594 } 00:06:26.594 ] 00:06:26.594 } 00:06:26.852 [2024-11-17 13:16:15.898557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.852 [2024-11-17 13:16:15.948145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.852 [2024-11-17 13:16:16.003393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.111 [2024-11-17 13:16:16.207845] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:27.111 [2024-11-17 13:16:16.207944] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:27.111 [2024-11-17 13:16:16.207956] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:27.111 [2024-11-17 13:16:16.207973] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.370 [2024-11-17 13:16:16.524647] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:27.629 00:06:27.629 real 0m14.949s 00:06:27.629 user 0m9.931s 00:06:27.629 sys 0m13.115s 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.629 13:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.629 ************************************ 00:06:27.629 END TEST dd_uring_copy 00:06:27.629 ************************************ 00:06:27.888 00:06:27.889 real 0m15.185s 00:06:27.889 user 0m10.041s 00:06:27.889 sys 0m13.241s 00:06:27.889 13:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.889 13:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:27.889 ************************************ 00:06:27.889 END TEST spdk_dd_uring 00:06:27.889 ************************************ 00:06:27.889 13:16:16 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:27.889 13:16:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.889 13:16:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.889 13:16:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:27.889 ************************************ 00:06:27.889 START TEST spdk_dd_sparse 00:06:27.889 ************************************ 00:06:27.889 13:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:27.889 * Looking for test storage... 00:06:27.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.889 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:28.148 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.149 --rc genhtml_branch_coverage=1 00:06:28.149 --rc genhtml_function_coverage=1 00:06:28.149 --rc genhtml_legend=1 00:06:28.149 --rc geninfo_all_blocks=1 00:06:28.149 --rc geninfo_unexecuted_blocks=1 00:06:28.149 00:06:28.149 ' 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.149 --rc genhtml_branch_coverage=1 00:06:28.149 --rc genhtml_function_coverage=1 00:06:28.149 --rc genhtml_legend=1 00:06:28.149 --rc geninfo_all_blocks=1 00:06:28.149 --rc geninfo_unexecuted_blocks=1 00:06:28.149 00:06:28.149 ' 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.149 --rc genhtml_branch_coverage=1 00:06:28.149 --rc genhtml_function_coverage=1 00:06:28.149 --rc genhtml_legend=1 00:06:28.149 --rc geninfo_all_blocks=1 00:06:28.149 --rc geninfo_unexecuted_blocks=1 00:06:28.149 00:06:28.149 ' 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.149 --rc genhtml_branch_coverage=1 00:06:28.149 --rc genhtml_function_coverage=1 00:06:28.149 --rc genhtml_legend=1 00:06:28.149 --rc geninfo_all_blocks=1 00:06:28.149 --rc geninfo_unexecuted_blocks=1 00:06:28.149 00:06:28.149 ' 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.149 13:16:17 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:28.149 1+0 records in 00:06:28.149 1+0 records out 00:06:28.149 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00697679 s, 601 MB/s 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:28.149 1+0 records in 00:06:28.149 1+0 records out 00:06:28.149 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00562493 s, 746 MB/s 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:28.149 1+0 records in 00:06:28.149 1+0 records out 00:06:28.149 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00892157 s, 470 MB/s 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 ************************************ 00:06:28.149 START TEST dd_sparse_file_to_file 00:06:28.149 ************************************ 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:28.149 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:28.149 [2024-11-17 13:16:17.232396] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:28.149 [2024-11-17 13:16:17.232489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61446 ] 00:06:28.149 { 00:06:28.149 "subsystems": [ 00:06:28.149 { 00:06:28.149 "subsystem": "bdev", 00:06:28.149 "config": [ 00:06:28.149 { 00:06:28.149 "params": { 00:06:28.149 "block_size": 4096, 00:06:28.149 "filename": "dd_sparse_aio_disk", 00:06:28.149 "name": "dd_aio" 00:06:28.149 }, 00:06:28.149 "method": "bdev_aio_create" 00:06:28.149 }, 00:06:28.149 { 00:06:28.149 "params": { 00:06:28.149 "lvs_name": "dd_lvstore", 00:06:28.149 "bdev_name": "dd_aio" 00:06:28.149 }, 00:06:28.149 "method": "bdev_lvol_create_lvstore" 00:06:28.149 }, 00:06:28.149 { 00:06:28.149 "method": "bdev_wait_for_examine" 00:06:28.149 } 00:06:28.149 ] 00:06:28.149 } 00:06:28.149 ] 00:06:28.149 } 00:06:28.408 [2024-11-17 13:16:17.384014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.408 [2024-11-17 13:16:17.450827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.408 [2024-11-17 13:16:17.513872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.408  [2024-11-17T13:16:17.891Z] Copying: 12/36 [MB] (average 705 MBps) 00:06:28.667 00:06:28.667 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:28.667 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:28.667 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:28.926 00:06:28.926 real 0m0.733s 00:06:28.926 user 0m0.457s 00:06:28.926 sys 0m0.393s 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:28.926 ************************************ 00:06:28.926 END TEST dd_sparse_file_to_file 00:06:28.926 ************************************ 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:28.926 ************************************ 00:06:28.926 START TEST dd_sparse_file_to_bdev 
00:06:28.926 ************************************ 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:28.926 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:28.927 13:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:28.927 [2024-11-17 13:16:18.018515] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:28.927 [2024-11-17 13:16:18.018619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:06:28.927 { 00:06:28.927 "subsystems": [ 00:06:28.927 { 00:06:28.927 "subsystem": "bdev", 00:06:28.927 "config": [ 00:06:28.927 { 00:06:28.927 "params": { 00:06:28.927 "block_size": 4096, 00:06:28.927 "filename": "dd_sparse_aio_disk", 00:06:28.927 "name": "dd_aio" 00:06:28.927 }, 00:06:28.927 "method": "bdev_aio_create" 00:06:28.927 }, 00:06:28.927 { 00:06:28.927 "params": { 00:06:28.927 "lvs_name": "dd_lvstore", 00:06:28.927 "lvol_name": "dd_lvol", 00:06:28.927 "size_in_mib": 36, 00:06:28.927 "thin_provision": true 00:06:28.927 }, 00:06:28.927 "method": "bdev_lvol_create" 00:06:28.927 }, 00:06:28.927 { 00:06:28.927 "method": "bdev_wait_for_examine" 00:06:28.927 } 00:06:28.927 ] 00:06:28.927 } 00:06:28.927 ] 00:06:28.927 } 00:06:29.186 [2024-11-17 13:16:18.170475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.186 [2024-11-17 13:16:18.238869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.186 [2024-11-17 13:16:18.301008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.186  [2024-11-17T13:16:18.704Z] Copying: 12/36 [MB] (average 461 MBps) 00:06:29.480 00:06:29.480 00:06:29.480 real 0m0.670s 00:06:29.480 user 0m0.426s 00:06:29.480 sys 0m0.375s 00:06:29.480 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.480 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:29.480 ************************************ 00:06:29.480 END TEST dd_sparse_file_to_bdev 00:06:29.480 ************************************ 00:06:29.480 13:16:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:29.480 13:16:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.480 13:16:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.480 13:16:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:29.739 ************************************ 00:06:29.739 START TEST dd_sparse_bdev_to_file 00:06:29.739 ************************************ 00:06:29.739 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:29.739 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:29.739 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:29.740 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:29.740 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:29.740 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:29.740 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:29.740 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:29.740 13:16:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:29.740 [2024-11-17 13:16:18.739605] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:29.740 [2024-11-17 13:16:18.739699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61527 ] 00:06:29.740 { 00:06:29.740 "subsystems": [ 00:06:29.740 { 00:06:29.740 "subsystem": "bdev", 00:06:29.740 "config": [ 00:06:29.740 { 00:06:29.740 "params": { 00:06:29.740 "block_size": 4096, 00:06:29.740 "filename": "dd_sparse_aio_disk", 00:06:29.740 "name": "dd_aio" 00:06:29.740 }, 00:06:29.740 "method": "bdev_aio_create" 00:06:29.740 }, 00:06:29.740 { 00:06:29.740 "method": "bdev_wait_for_examine" 00:06:29.740 } 00:06:29.740 ] 00:06:29.740 } 00:06:29.740 ] 00:06:29.740 } 00:06:29.740 [2024-11-17 13:16:18.893255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.998 [2024-11-17 13:16:18.963673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.998 [2024-11-17 13:16:19.023840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.998  [2024-11-17T13:16:19.481Z] Copying: 12/36 [MB] (average 857 MBps) 00:06:30.257 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:30.257 00:06:30.257 real 0m0.686s 00:06:30.257 user 0m0.420s 00:06:30.257 sys 0m0.389s 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 ************************************ 00:06:30.257 END TEST dd_sparse_bdev_to_file 00:06:30.257 ************************************ 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:30.257 00:06:30.257 real 0m2.512s 00:06:30.257 user 0m1.484s 00:06:30.257 sys 0m1.396s 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.257 13:16:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:30.257 ************************************ 00:06:30.257 END TEST spdk_dd_sparse 00:06:30.257 ************************************ 00:06:30.257 13:16:19 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:30.257 13:16:19 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.257 13:16:19 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.257 13:16:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:30.517 ************************************ 00:06:30.517 START TEST spdk_dd_negative 00:06:30.517 ************************************ 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:30.517 * Looking for test storage... 
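Note: each spdk_dd run in the sparse tests above received its bdev stack as JSON on --json /dev/fd/62, i.e. a config generated on the fly and handed over through a file descriptor. Below is a hand-written equivalent of the file_to_file configuration, reconstructed from the JSON dumped in the log; paths and names come from the log, the aio backing file dd_sparse_aio_disk is assumed to already exist as in the test's prepare step, and this is an illustrative sketch rather than the SPDK test script:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

  # Bdev stack matching the JSON dumped above: an AIO bdev over a plain file,
  # with an lvol store created on top of it.
  CONF='{
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_aio_create",
            "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
          { "method": "bdev_lvol_create_lvstore",
            "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }'

  # Process substitution hands the config to spdk_dd over a /dev/fd path,
  # equivalent to the --json /dev/fd/62 redirection seen in the log.
  "$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse \
      --json <(printf '%s\n' "$CONF")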
00:06:30.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.517 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.518 --rc genhtml_branch_coverage=1 00:06:30.518 --rc genhtml_function_coverage=1 00:06:30.518 --rc genhtml_legend=1 00:06:30.518 --rc geninfo_all_blocks=1 00:06:30.518 --rc geninfo_unexecuted_blocks=1 00:06:30.518 00:06:30.518 ' 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.518 --rc genhtml_branch_coverage=1 00:06:30.518 --rc genhtml_function_coverage=1 00:06:30.518 --rc genhtml_legend=1 00:06:30.518 --rc geninfo_all_blocks=1 00:06:30.518 --rc geninfo_unexecuted_blocks=1 00:06:30.518 00:06:30.518 ' 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.518 --rc genhtml_branch_coverage=1 00:06:30.518 --rc genhtml_function_coverage=1 00:06:30.518 --rc genhtml_legend=1 00:06:30.518 --rc geninfo_all_blocks=1 00:06:30.518 --rc geninfo_unexecuted_blocks=1 00:06:30.518 00:06:30.518 ' 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.518 --rc genhtml_branch_coverage=1 00:06:30.518 --rc genhtml_function_coverage=1 00:06:30.518 --rc genhtml_legend=1 00:06:30.518 --rc geninfo_all_blocks=1 00:06:30.518 --rc geninfo_unexecuted_blocks=1 00:06:30.518 00:06:30.518 ' 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:30.518 ************************************ 00:06:30.518 START TEST 
dd_invalid_arguments 00:06:30.518 ************************************ 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.518 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:30.778 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:30.778 00:06:30.778 CPU options: 00:06:30.778 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:30.778 (like [0,1,10]) 00:06:30.778 --lcores lcore to CPU mapping list. The list is in the format: 00:06:30.778 [<,lcores[@CPUs]>...] 00:06:30.778 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:30.778 Within the group, '-' is used for range separator, 00:06:30.778 ',' is used for single number separator. 00:06:30.778 '( )' can be omitted for single element group, 00:06:30.778 '@' can be omitted if cpus and lcores have the same value 00:06:30.778 --disable-cpumask-locks Disable CPU core lock files. 00:06:30.778 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:30.778 pollers in the app support interrupt mode) 00:06:30.778 -p, --main-core main (primary) core for DPDK 00:06:30.778 00:06:30.778 Configuration options: 00:06:30.778 -c, --config, --json JSON config file 00:06:30.778 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:30.778 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:30.778 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:30.778 --rpcs-allowed comma-separated list of permitted RPCS 00:06:30.778 --json-ignore-init-errors don't exit on invalid config entry 00:06:30.778 00:06:30.778 Memory options: 00:06:30.778 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:30.778 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:30.778 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:30.778 -R, --huge-unlink unlink huge files after initialization 00:06:30.778 -n, --mem-channels number of memory channels used for DPDK 00:06:30.778 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:30.778 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:30.778 --no-huge run without using hugepages 00:06:30.778 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:30.778 -i, --shm-id shared memory ID (optional) 00:06:30.778 -g, --single-file-segments force creating just one hugetlbfs file 00:06:30.778 00:06:30.778 PCI options: 00:06:30.779 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:30.779 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:30.779 -u, --no-pci disable PCI access 00:06:30.779 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:30.779 00:06:30.779 Log options: 00:06:30.779 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:30.779 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:30.779 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:30.779 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:30.779 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:30.779 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:30.779 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:30.779 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:30.779 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:30.779 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:30.779 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:30.779 --silence-noticelog disable notice level logging to stderr 00:06:30.779 00:06:30.779 Trace options: 00:06:30.779 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:30.779 setting 0 to disable trace (default 32768) 00:06:30.779 Tracepoints vary in size and can use more than one trace entry. 00:06:30.779 -e, --tpoint-group [:] 00:06:30.779 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:30.779 [2024-11-17 13:16:19.748017] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:30.779 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:30.779 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:30.779 bdev_raid, scheduler, all). 00:06:30.779 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:30.779 a tracepoint group. First tpoint inside a group can be enabled by 00:06:30.779 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:30.779 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:30.779 in /include/spdk_internal/trace_defs.h 00:06:30.779 00:06:30.779 Other options: 00:06:30.779 -h, --help show this usage 00:06:30.779 -v, --version print SPDK version 00:06:30.779 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:30.779 --env-context Opaque context for use of the env implementation 00:06:30.779 00:06:30.779 Application specific: 00:06:30.779 [--------- DD Options ---------] 00:06:30.779 --if Input file. Must specify either --if or --ib. 00:06:30.779 --ib Input bdev. Must specifier either --if or --ib 00:06:30.779 --of Output file. Must specify either --of or --ob. 00:06:30.779 --ob Output bdev. Must specify either --of or --ob. 00:06:30.779 --iflag Input file flags. 00:06:30.779 --oflag Output file flags. 00:06:30.779 --bs I/O unit size (default: 4096) 00:06:30.779 --qd Queue depth (default: 2) 00:06:30.779 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:30.779 --skip Skip this many I/O units at start of input. (default: 0) 00:06:30.779 --seek Skip this many I/O units at start of output. (default: 0) 00:06:30.779 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:30.779 --sparse Enable hole skipping in input target 00:06:30.779 Available iflag and oflag values: 00:06:30.779 append - append mode 00:06:30.779 direct - use direct I/O for data 00:06:30.779 directory - fail unless a directory 00:06:30.779 dsync - use synchronized I/O for data 00:06:30.779 noatime - do not update access time 00:06:30.779 noctty - do not assign controlling terminal from file 00:06:30.779 nofollow - do not follow symlinks 00:06:30.779 nonblock - use non-blocking I/O 00:06:30.779 sync - use synchronized I/O for data and metadata 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.779 00:06:30.779 real 0m0.074s 00:06:30.779 user 0m0.052s 00:06:30.779 sys 0m0.021s 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:30.779 ************************************ 00:06:30.779 END TEST dd_invalid_arguments 00:06:30.779 ************************************ 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:30.779 ************************************ 00:06:30.779 START TEST dd_double_input 00:06:30.779 ************************************ 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.779 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:30.780 [2024-11-17 13:16:19.869278] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
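Note: every negative test in this suite follows the same pattern — the spdk_dd invocation is wrapped in NOT, so the test passes only when the command exits non-zero (here the expected "either --if or --ib" rejection, captured as es=22 in the trace that follows). An illustrative reconstruction of that wrapper, not SPDK's actual autotest_common.sh implementation:

  # NOT succeeds only when the wrapped command fails, so an expected error
  # from spdk_dd counts as a test pass.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # invert: failure of the command is success of the test
  }

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

  # Example: an input file and an input bdev are given at the same time,
  # which spdk_dd rejects, as in the error message above.
  if NOT "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
      echo "double input correctly rejected"
  fi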
00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.780 00:06:30.780 real 0m0.071s 00:06:30.780 user 0m0.044s 00:06:30.780 sys 0m0.022s 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:30.780 ************************************ 00:06:30.780 END TEST dd_double_input 00:06:30.780 ************************************ 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:30.780 ************************************ 00:06:30.780 START TEST dd_double_output 00:06:30.780 ************************************ 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.780 13:16:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:30.780 [2024-11-17 13:16:19.994313] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.039 00:06:31.039 real 0m0.075s 00:06:31.039 user 0m0.051s 00:06:31.039 sys 0m0.023s 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:31.039 ************************************ 00:06:31.039 END TEST dd_double_output 00:06:31.039 ************************************ 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:31.039 ************************************ 00:06:31.039 START TEST dd_no_input 00:06:31.039 ************************************ 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:31.039 [2024-11-17 13:16:20.117678] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.039 00:06:31.039 real 0m0.073s 00:06:31.039 user 0m0.042s 00:06:31.039 sys 0m0.030s 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:31.039 ************************************ 00:06:31.039 END TEST dd_no_input 00:06:31.039 ************************************ 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:31.039 ************************************ 00:06:31.039 START TEST dd_no_output 00:06:31.039 ************************************ 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:31.039 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.039 [2024-11-17 13:16:20.243307] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:31.298 13:16:20 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.299 00:06:31.299 real 0m0.075s 00:06:31.299 user 0m0.042s 00:06:31.299 sys 0m0.033s 00:06:31.299 ************************************ 00:06:31.299 END TEST dd_no_output 00:06:31.299 ************************************ 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:31.299 ************************************ 00:06:31.299 START TEST dd_wrong_blocksize 00:06:31.299 ************************************ 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:31.299 [2024-11-17 13:16:20.374406] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.299 00:06:31.299 real 0m0.077s 00:06:31.299 user 0m0.050s 00:06:31.299 sys 0m0.027s 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:31.299 ************************************ 00:06:31.299 END TEST dd_wrong_blocksize 00:06:31.299 ************************************ 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:31.299 ************************************ 00:06:31.299 START TEST dd_smaller_blocksize 00:06:31.299 ************************************ 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.299 
13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:31.299 13:16:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:31.299 [2024-11-17 13:16:20.506123] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:31.299 [2024-11-17 13:16:20.506201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61753 ] 00:06:31.558 [2024-11-17 13:16:20.659006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.558 [2024-11-17 13:16:20.724181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.817 [2024-11-17 13:16:20.786057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.077 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:32.336 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:32.336 [2024-11-17 13:16:21.479584] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:32.336 [2024-11-17 13:16:21.479782] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.595 [2024-11-17 13:16:21.611064] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.595 00:06:32.595 real 0m1.246s 00:06:32.595 user 0m0.455s 00:06:32.595 sys 0m0.681s 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:32.595 ************************************ 00:06:32.595 END TEST dd_smaller_blocksize 00:06:32.595 ************************************ 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.595 ************************************ 00:06:32.595 START TEST dd_invalid_count 00:06:32.595 ************************************ 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
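Note: the es=244 -> 116 -> 1 sequence in the dd_smaller_blocksize trace above is the harness normalizing spdk_dd's exit status before the final (( !es == 0 )) check: a value above 128 has 128 subtracted (244 - 128 = 116), and the case statement then collapses it to a plain failure of 1. A reconstruction of only what the trace shows (the real logic and its full case arms live in autotest_common.sh and are not visible in this log):

  es=244                                   # raw exit status reported for spdk_dd
  (( es > 128 )) && es=$(( es - 128 ))     # 244 - 128 = 116
  case "$es" in
      116) es=1 ;;                         # the trace maps this value to a generic failure
  esac
  (( !es == 0 )) && echo "non-zero status retained: the negative test passes"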
00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.595 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.596 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.596 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.596 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.596 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.596 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:32.596 [2024-11-17 13:16:21.809638] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.855 00:06:32.855 real 0m0.082s 00:06:32.855 user 0m0.048s 00:06:32.855 sys 0m0.033s 00:06:32.855 ************************************ 00:06:32.855 END TEST dd_invalid_count 00:06:32.855 ************************************ 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.855 ************************************ 
00:06:32.855 START TEST dd_invalid_oflag 00:06:32.855 ************************************ 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:32.855 [2024-11-17 13:16:21.957146] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:32.855 00:06:32.855 real 0m0.100s 00:06:32.855 user 0m0.066s 00:06:32.855 sys 0m0.032s 00:06:32.855 ************************************ 00:06:32.855 END TEST dd_invalid_oflag 00:06:32.855 ************************************ 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.855 13:16:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:32.855 ************************************ 00:06:32.855 START TEST dd_invalid_iflag 00:06:32.855 
************************************ 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.855 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:33.114 [2024-11-17 13:16:22.107081] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:33.114 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:33.114 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.114 ************************************ 00:06:33.114 END TEST dd_invalid_iflag 00:06:33.114 ************************************ 00:06:33.114 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.114 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.114 00:06:33.114 real 0m0.081s 00:06:33.114 user 0m0.049s 00:06:33.115 sys 0m0.030s 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:33.115 ************************************ 00:06:33.115 START TEST dd_unknown_flag 00:06:33.115 ************************************ 00:06:33.115 
13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.115 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:33.115 [2024-11-17 13:16:22.242787] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:33.115 [2024-11-17 13:16:22.242911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61856 ] 00:06:33.373 [2024-11-17 13:16:22.396331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.373 [2024-11-17 13:16:22.459556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.373 [2024-11-17 13:16:22.519677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.373 [2024-11-17 13:16:22.559051] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:33.374 [2024-11-17 13:16:22.559136] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.374 [2024-11-17 13:16:22.559212] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:33.374 [2024-11-17 13:16:22.559229] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.374 [2024-11-17 13:16:22.559514] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:33.374 [2024-11-17 13:16:22.559534] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.374 [2024-11-17 13:16:22.559589] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:33.374 [2024-11-17 13:16:22.559612] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:33.633 [2024-11-17 13:16:22.688028] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.633 00:06:33.633 real 0m0.590s 00:06:33.633 user 0m0.340s 00:06:33.633 sys 0m0.152s 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.633 ************************************ 00:06:33.633 END TEST dd_unknown_flag 00:06:33.633 ************************************ 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:33.633 ************************************ 00:06:33.633 START TEST dd_invalid_json 00:06:33.633 ************************************ 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.633 13:16:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:33.892 [2024-11-17 13:16:22.895326] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:33.892 [2024-11-17 13:16:22.895604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61885 ] 00:06:33.892 [2024-11-17 13:16:23.042914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.892 [2024-11-17 13:16:23.101035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.892 [2024-11-17 13:16:23.101120] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:33.892 [2024-11-17 13:16:23.101137] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.892 [2024-11-17 13:16:23.101146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.892 [2024-11-17 13:16:23.101181] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:34.151 ************************************ 00:06:34.151 END TEST dd_invalid_json 00:06:34.151 ************************************ 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.151 00:06:34.151 real 0m0.332s 00:06:34.151 user 0m0.159s 00:06:34.151 sys 0m0.070s 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:34.151 ************************************ 00:06:34.151 START TEST dd_invalid_seek 00:06:34.151 ************************************ 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:34.151 
13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:34.151 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.152 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:34.152 { 00:06:34.152 "subsystems": [ 00:06:34.152 { 00:06:34.152 "subsystem": "bdev", 00:06:34.152 "config": [ 00:06:34.152 { 00:06:34.152 "params": { 00:06:34.152 "block_size": 512, 00:06:34.152 "num_blocks": 512, 00:06:34.152 "name": "malloc0" 00:06:34.152 }, 00:06:34.152 "method": "bdev_malloc_create" 00:06:34.152 }, 00:06:34.152 { 00:06:34.152 "params": { 00:06:34.152 "block_size": 512, 00:06:34.152 "num_blocks": 512, 00:06:34.152 "name": "malloc1" 00:06:34.152 }, 00:06:34.152 "method": "bdev_malloc_create" 00:06:34.152 }, 00:06:34.152 { 00:06:34.152 "method": "bdev_wait_for_examine" 00:06:34.152 } 00:06:34.152 ] 00:06:34.152 } 00:06:34.152 ] 00:06:34.152 } 00:06:34.152 [2024-11-17 13:16:23.267824] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:34.152 [2024-11-17 13:16:23.267984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61914 ] 00:06:34.411 [2024-11-17 13:16:23.417379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.411 [2024-11-17 13:16:23.482152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.411 [2024-11-17 13:16:23.543067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.411 [2024-11-17 13:16:23.606468] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:34.411 [2024-11-17 13:16:23.606544] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.670 [2024-11-17 13:16:23.733754] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.670 00:06:34.670 real 0m0.594s 00:06:34.670 user 0m0.372s 00:06:34.670 sys 0m0.175s 00:06:34.670 ************************************ 00:06:34.670 END TEST dd_invalid_seek 00:06:34.670 ************************************ 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:34.670 ************************************ 00:06:34.670 START TEST dd_invalid_skip 00:06:34.670 ************************************ 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.670 13:16:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:34.929 [2024-11-17 13:16:23.919424] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:34.929 [2024-11-17 13:16:23.919539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61948 ] 00:06:34.929 { 00:06:34.929 "subsystems": [ 00:06:34.929 { 00:06:34.929 "subsystem": "bdev", 00:06:34.929 "config": [ 00:06:34.929 { 00:06:34.929 "params": { 00:06:34.929 "block_size": 512, 00:06:34.929 "num_blocks": 512, 00:06:34.929 "name": "malloc0" 00:06:34.929 }, 00:06:34.929 "method": "bdev_malloc_create" 00:06:34.929 }, 00:06:34.929 { 00:06:34.929 "params": { 00:06:34.929 "block_size": 512, 00:06:34.929 "num_blocks": 512, 00:06:34.929 "name": "malloc1" 00:06:34.929 }, 00:06:34.929 "method": "bdev_malloc_create" 00:06:34.929 }, 00:06:34.929 { 00:06:34.929 "method": "bdev_wait_for_examine" 00:06:34.929 } 00:06:34.929 ] 00:06:34.929 } 00:06:34.929 ] 00:06:34.929 } 00:06:34.929 [2024-11-17 13:16:24.072602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.929 [2024-11-17 13:16:24.146607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.189 [2024-11-17 13:16:24.209226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.189 [2024-11-17 13:16:24.277182] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:35.189 [2024-11-17 13:16:24.277253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.189 [2024-11-17 13:16:24.407978] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.448 00:06:35.448 real 0m0.619s 00:06:35.448 user 0m0.389s 00:06:35.448 sys 0m0.189s 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.448 ************************************ 00:06:35.448 END TEST dd_invalid_skip 00:06:35.448 ************************************ 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:35.448 ************************************ 00:06:35.448 START TEST dd_invalid_input_count 00:06:35.448 ************************************ 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:35.448 13:16:24 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.448 13:16:24 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:35.448 [2024-11-17 13:16:24.598962] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:35.448 [2024-11-17 13:16:24.599289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61981 ] 00:06:35.448 { 00:06:35.448 "subsystems": [ 00:06:35.448 { 00:06:35.448 "subsystem": "bdev", 00:06:35.448 "config": [ 00:06:35.448 { 00:06:35.448 "params": { 00:06:35.448 "block_size": 512, 00:06:35.448 "num_blocks": 512, 00:06:35.449 "name": "malloc0" 00:06:35.449 }, 00:06:35.449 "method": "bdev_malloc_create" 00:06:35.449 }, 00:06:35.449 { 00:06:35.449 "params": { 00:06:35.449 "block_size": 512, 00:06:35.449 "num_blocks": 512, 00:06:35.449 "name": "malloc1" 00:06:35.449 }, 00:06:35.449 "method": "bdev_malloc_create" 00:06:35.449 }, 00:06:35.449 { 00:06:35.449 "method": "bdev_wait_for_examine" 00:06:35.449 } 00:06:35.449 ] 00:06:35.449 } 00:06:35.449 ] 00:06:35.449 } 00:06:35.708 [2024-11-17 13:16:24.755426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.708 [2024-11-17 13:16:24.823577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.708 [2024-11-17 13:16:24.886538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.967 [2024-11-17 13:16:24.953595] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:35.967 [2024-11-17 13:16:24.953677] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.967 [2024-11-17 13:16:25.077493] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:35.967 ************************************ 00:06:35.967 END TEST dd_invalid_input_count 00:06:35.967 ************************************ 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.967 00:06:35.967 real 0m0.612s 00:06:35.967 user 0m0.395s 00:06:35.967 sys 0m0.177s 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.967 13:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:36.226 ************************************ 00:06:36.226 START TEST dd_invalid_output_count 00:06:36.226 ************************************ 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.226 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:36.226 { 00:06:36.226 "subsystems": [ 00:06:36.226 { 00:06:36.226 "subsystem": "bdev", 00:06:36.226 "config": [ 00:06:36.226 { 00:06:36.226 "params": { 00:06:36.226 "block_size": 512, 00:06:36.226 "num_blocks": 512, 00:06:36.226 "name": "malloc0" 00:06:36.226 }, 00:06:36.226 "method": "bdev_malloc_create" 00:06:36.226 }, 00:06:36.226 { 00:06:36.226 "method": "bdev_wait_for_examine" 00:06:36.226 } 00:06:36.226 ] 00:06:36.226 } 00:06:36.226 ] 00:06:36.226 } 00:06:36.226 [2024-11-17 13:16:25.254281] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 
initialization... 00:06:36.226 [2024-11-17 13:16:25.254382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62020 ] 00:06:36.226 [2024-11-17 13:16:25.407065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.485 [2024-11-17 13:16:25.473807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.485 [2024-11-17 13:16:25.532409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.486 [2024-11-17 13:16:25.588424] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:36.486 [2024-11-17 13:16:25.588771] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.745 [2024-11-17 13:16:25.710799] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:36.745 ************************************ 00:06:36.745 END TEST dd_invalid_output_count 00:06:36.745 ************************************ 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.745 00:06:36.745 real 0m0.593s 00:06:36.745 user 0m0.380s 00:06:36.745 sys 0m0.165s 00:06:36.745 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:36.746 ************************************ 00:06:36.746 START TEST dd_bs_not_multiple 00:06:36.746 ************************************ 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:36.746 13:16:25 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.746 13:16:25 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:36.746 [2024-11-17 13:16:25.901320] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:36.746 [2024-11-17 13:16:25.901609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62052 ] 00:06:36.746 { 00:06:36.746 "subsystems": [ 00:06:36.746 { 00:06:36.746 "subsystem": "bdev", 00:06:36.746 "config": [ 00:06:36.746 { 00:06:36.746 "params": { 00:06:36.746 "block_size": 512, 00:06:36.746 "num_blocks": 512, 00:06:36.746 "name": "malloc0" 00:06:36.746 }, 00:06:36.746 "method": "bdev_malloc_create" 00:06:36.746 }, 00:06:36.746 { 00:06:36.746 "params": { 00:06:36.746 "block_size": 512, 00:06:36.746 "num_blocks": 512, 00:06:36.746 "name": "malloc1" 00:06:36.746 }, 00:06:36.746 "method": "bdev_malloc_create" 00:06:36.746 }, 00:06:36.746 { 00:06:36.746 "method": "bdev_wait_for_examine" 00:06:36.746 } 00:06:36.746 ] 00:06:36.746 } 00:06:36.746 ] 00:06:36.746 } 00:06:37.006 [2024-11-17 13:16:26.050768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.006 [2024-11-17 13:16:26.106402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.006 [2024-11-17 13:16:26.161900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.006 [2024-11-17 13:16:26.224944] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:37.006 [2024-11-17 13:16:26.225000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.265 [2024-11-17 13:16:26.342712] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.265 00:06:37.265 real 0m0.570s 00:06:37.265 user 0m0.369s 00:06:37.265 sys 0m0.162s 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.265 ************************************ 00:06:37.265 END TEST dd_bs_not_multiple 00:06:37.265 ************************************ 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:37.265 ************************************ 00:06:37.265 END TEST spdk_dd_negative 00:06:37.265 ************************************ 00:06:37.265 00:06:37.265 real 0m6.969s 00:06:37.265 user 0m3.701s 00:06:37.265 sys 0m2.651s 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.265 13:16:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:37.523 ************************************ 00:06:37.523 END TEST spdk_dd 00:06:37.523 ************************************ 00:06:37.523 00:06:37.523 real 1m17.173s 00:06:37.523 user 0m48.735s 00:06:37.523 sys 0m35.113s 00:06:37.523 13:16:26 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:37.523 13:16:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:37.523 13:16:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:37.523 13:16:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.523 13:16:26 -- common/autotest_common.sh@10 -- # set +x 00:06:37.523 13:16:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:37.523 13:16:26 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:37.523 13:16:26 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.523 13:16:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.523 13:16:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.523 13:16:26 -- common/autotest_common.sh@10 -- # set +x 00:06:37.523 ************************************ 00:06:37.523 START TEST nvmf_tcp 00:06:37.523 ************************************ 00:06:37.523 13:16:26 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.523 * Looking for test storage... 00:06:37.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:37.523 13:16:26 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.523 13:16:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.523 13:16:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.783 13:16:26 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.783 --rc genhtml_branch_coverage=1 00:06:37.783 --rc genhtml_function_coverage=1 00:06:37.783 --rc genhtml_legend=1 00:06:37.783 --rc geninfo_all_blocks=1 00:06:37.783 --rc geninfo_unexecuted_blocks=1 00:06:37.783 00:06:37.783 ' 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.783 --rc genhtml_branch_coverage=1 00:06:37.783 --rc genhtml_function_coverage=1 00:06:37.783 --rc genhtml_legend=1 00:06:37.783 --rc geninfo_all_blocks=1 00:06:37.783 --rc geninfo_unexecuted_blocks=1 00:06:37.783 00:06:37.783 ' 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.783 --rc genhtml_branch_coverage=1 00:06:37.783 --rc genhtml_function_coverage=1 00:06:37.783 --rc genhtml_legend=1 00:06:37.783 --rc geninfo_all_blocks=1 00:06:37.783 --rc geninfo_unexecuted_blocks=1 00:06:37.783 00:06:37.783 ' 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.783 --rc genhtml_branch_coverage=1 00:06:37.783 --rc genhtml_function_coverage=1 00:06:37.783 --rc genhtml_legend=1 00:06:37.783 --rc geninfo_all_blocks=1 00:06:37.783 --rc geninfo_unexecuted_blocks=1 00:06:37.783 00:06:37.783 ' 00:06:37.783 13:16:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:37.783 13:16:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:37.783 13:16:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.783 13:16:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 ************************************ 00:06:37.783 START TEST nvmf_target_core 00:06:37.783 ************************************ 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:37.783 * Looking for test storage... 00:06:37.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.783 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.784 --rc genhtml_branch_coverage=1 00:06:37.784 --rc genhtml_function_coverage=1 00:06:37.784 --rc genhtml_legend=1 00:06:37.784 --rc geninfo_all_blocks=1 00:06:37.784 --rc geninfo_unexecuted_blocks=1 00:06:37.784 00:06:37.784 ' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.784 --rc genhtml_branch_coverage=1 00:06:37.784 --rc genhtml_function_coverage=1 00:06:37.784 --rc genhtml_legend=1 00:06:37.784 --rc geninfo_all_blocks=1 00:06:37.784 --rc geninfo_unexecuted_blocks=1 00:06:37.784 00:06:37.784 ' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.784 --rc genhtml_branch_coverage=1 00:06:37.784 --rc genhtml_function_coverage=1 00:06:37.784 --rc genhtml_legend=1 00:06:37.784 --rc geninfo_all_blocks=1 00:06:37.784 --rc geninfo_unexecuted_blocks=1 00:06:37.784 00:06:37.784 ' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.784 --rc genhtml_branch_coverage=1 00:06:37.784 --rc genhtml_function_coverage=1 00:06:37.784 --rc genhtml_legend=1 00:06:37.784 --rc geninfo_all_blocks=1 00:06:37.784 --rc geninfo_unexecuted_blocks=1 00:06:37.784 00:06:37.784 ' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.784 13:16:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.044 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:38.044 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
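Note on the very long PATH values traced above: /etc/opt/spdk-pkgdep/paths/export.sh prepends the same toolchain directories unconditionally, and it gets re-sourced every time a test script sources common.sh, so each nested test adds another golangci/go/protoc prefix to an already-expanded PATH. Reconstructed from the @2..@4 trace lines, the script is essentially the following sketch (not a verbatim copy):
    # three unconditional prepends, executed on every source of paths/export.sh
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
Each re-source therefore lengthens PATH by three entries, which is harmless but explains why the exported value keeps repeating the same directories.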
00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.045 ************************************ 00:06:38.045 START TEST nvmf_host_management 00:06:38.045 ************************************ 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.045 * Looking for test storage... 
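The "[: : integer expression expected" message logged above is not a test failure: common.sh line 33 feeds an empty (unset) value to a numeric test, bash rejects it with status 2, and the harness carries on. A minimal reproduction and the usual guard, with SOME_FLAG standing in for whatever variable the script actually tests:
    [ '' -eq 1 ] && echo enabled                   # -> [: : integer expression expected
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled    # guarded form: empty/unset defaults to 0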
00:06:38.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.045 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.046 --rc genhtml_branch_coverage=1 00:06:38.046 --rc genhtml_function_coverage=1 00:06:38.046 --rc genhtml_legend=1 00:06:38.046 --rc geninfo_all_blocks=1 00:06:38.046 --rc geninfo_unexecuted_blocks=1 00:06:38.046 00:06:38.046 ' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.046 --rc genhtml_branch_coverage=1 00:06:38.046 --rc genhtml_function_coverage=1 00:06:38.046 --rc genhtml_legend=1 00:06:38.046 --rc geninfo_all_blocks=1 00:06:38.046 --rc geninfo_unexecuted_blocks=1 00:06:38.046 00:06:38.046 ' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.046 --rc genhtml_branch_coverage=1 00:06:38.046 --rc genhtml_function_coverage=1 00:06:38.046 --rc genhtml_legend=1 00:06:38.046 --rc geninfo_all_blocks=1 00:06:38.046 --rc geninfo_unexecuted_blocks=1 00:06:38.046 00:06:38.046 ' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.046 --rc genhtml_branch_coverage=1 00:06:38.046 --rc genhtml_function_coverage=1 00:06:38.046 --rc genhtml_legend=1 00:06:38.046 --rc geninfo_all_blocks=1 00:06:38.046 --rc geninfo_unexecuted_blocks=1 00:06:38.046 00:06:38.046 ' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
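The block above is scripts/common.sh checking whether the installed lcov predates version 2 so it can choose the right --rc coverage flags; the field-by-field comparison it traces is roughly the following sketch (the real cmp_versions supports more operators than '<'):
    lt() {   # true if dotted version $1 < $2
        local -a ver1 ver2; local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'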
00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.046 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.046 13:16:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:38.046 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:38.047 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:38.306 Cannot find device "nvmf_init_br" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:38.306 Cannot find device "nvmf_init_br2" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:38.306 Cannot find device "nvmf_tgt_br" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.306 Cannot find device "nvmf_tgt_br2" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:38.306 Cannot find device "nvmf_init_br" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:38.306 Cannot find device "nvmf_init_br2" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:38.306 Cannot find device "nvmf_tgt_br" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:38.306 Cannot find device "nvmf_tgt_br2" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:38.306 Cannot find device "nvmf_br" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:38.306 Cannot find device "nvmf_init_if" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:38.306 Cannot find device "nvmf_init_if2" 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:38.306 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:38.307 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:38.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:38.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:06:38.566 00:06:38.566 --- 10.0.0.3 ping statistics --- 00:06:38.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.566 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:38.566 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:38.566 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:06:38.566 00:06:38.566 --- 10.0.0.4 ping statistics --- 00:06:38.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.566 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:38.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:06:38.566 00:06:38.566 --- 10.0.0.1 ping statistics --- 00:06:38.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.566 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:38.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:38.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:06:38.566 00:06:38.566 --- 10.0.0.2 ping statistics --- 00:06:38.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.566 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62395 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62395 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62395 ']' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.566 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.825 [2024-11-17 13:16:27.840839] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
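For readability: the run of "Cannot find device" / "Cannot open network namespace" messages above is the teardown-first pass of nvmf_veth_init failing benignly (each delete is followed by true), after which the virtual test network is built. Condensed from the traced commands, the topology is:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator side, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, 10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side, 10.0.0.4/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                # the four *_br peers are enslaved to nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
The four pings (10.0.0.3/4 from the host, 10.0.0.1/2 from inside the namespace) are the connectivity check performed before the target application is started.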
00:06:38.825 [2024-11-17 13:16:27.840942] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.825 [2024-11-17 13:16:27.998214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.098 [2024-11-17 13:16:28.067240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.098 [2024-11-17 13:16:28.067313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.098 [2024-11-17 13:16:28.067327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.098 [2024-11-17 13:16:28.067337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.098 [2024-11-17 13:16:28.067346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.098 [2024-11-17 13:16:28.068848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.098 [2024-11-17 13:16:28.069013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.098 [2024-11-17 13:16:28.069161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.099 [2024-11-17 13:16:28.069164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.099 [2024-11-17 13:16:28.128117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 [2024-11-17 13:16:28.247573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
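The target bring-up captured above reduces to two steps: launch nvmf_tgt inside the test namespace pinned to cores 1-4 (mask 0x1E, hence the four reactor lines), then create the TCP transport over its RPC socket. A condensed equivalent, with rpc_cmd being the autotest wrapper around scripts/rpc.py and the flags copied verbatim from the trace:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!                                   # 62395 in this run
    waitforlisten "$nvmfpid"                     # blocks until /var/tmp/spdk.sock answers
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192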
00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.099 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 Malloc0 00:06:39.358 [2024-11-17 13:16:28.323909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62441 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62441 /var/tmp/bdevperf.sock 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62441 ']' 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:39.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
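The perf job started above is bdevperf with its bdev configuration supplied through a process substitution (the /dev/fd/63 seen in the trace); spelled out, the invocation amounts to:
    # queue depth 64, 64 KiB I/Os, "verify" workload, 10 second run;
    # gen_nvmf_target_json 0 emits the Nvme0 attach config printed further down in the log
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10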
00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:39.358 { 00:06:39.358 "params": { 00:06:39.358 "name": "Nvme$subsystem", 00:06:39.358 "trtype": "$TEST_TRANSPORT", 00:06:39.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.358 "adrfam": "ipv4", 00:06:39.358 "trsvcid": "$NVMF_PORT", 00:06:39.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.358 "hdgst": ${hdgst:-false}, 00:06:39.358 "ddgst": ${ddgst:-false} 00:06:39.358 }, 00:06:39.358 "method": "bdev_nvme_attach_controller" 00:06:39.358 } 00:06:39.358 EOF 00:06:39.358 )") 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:39.358 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:39.358 "params": { 00:06:39.358 "name": "Nvme0", 00:06:39.358 "trtype": "tcp", 00:06:39.358 "traddr": "10.0.0.3", 00:06:39.358 "adrfam": "ipv4", 00:06:39.358 "trsvcid": "4420", 00:06:39.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.358 "hdgst": false, 00:06:39.358 "ddgst": false 00:06:39.358 }, 00:06:39.358 "method": "bdev_nvme_attach_controller" 00:06:39.358 }' 00:06:39.358 [2024-11-17 13:16:28.429615] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:39.358 [2024-11-17 13:16:28.429711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62441 ] 00:06:39.617 [2024-11-17 13:16:28.579603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.617 [2024-11-17 13:16:28.637490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.617 [2024-11-17 13:16:28.713268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.876 Running I/O for 10 seconds... 
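Before removing the host from the subsystem, the test first waits for real traffic: the waitforio helper traced below polls bdevperf's per-bdev iostat until at least 100 reads have completed (the first sample here was 67, the retry saw 579). Its loop is roughly:
    for ((i = 10; i != 0; i--)); do
        reads=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done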
00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:39.876 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:39.877 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.135 13:16:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:40.135 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:40.136 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:40.136 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:40.136 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:40.136 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:40.136 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.136 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.136 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:40.136 00:06:40.136 Latency(us) 00:06:40.136 [2024-11-17T13:16:29.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.136 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:40.136 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:40.136 Verification LBA range: start 0x0 length 0x400 00:06:40.136 Nvme0n1 : 0.46 1404.85 87.80 140.49 0.00 39800.07 2085.24 43372.92 00:06:40.136 [2024-11-17T13:16:29.360Z] =================================================================================================================== 00:06:40.136 [2024-11-17T13:16:29.360Z] Total : 1404.85 87.80 140.49 0.00 39800.07 2085.24 43372.92 00:06:40.136 [2024-11-17 13:16:29.305349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:40.136 [2024-11-17 13:16:29.305939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.305987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.305996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 
[2024-11-17 13:16:29.306148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 
13:16:29.306350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.136 [2024-11-17 13:16:29.306473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.136 [2024-11-17 13:16:29.306482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 
13:16:29.306537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.137 [2024-11-17 13:16:29.306694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226c2d0 is same with the state(6) to be set 00:06:40.137 [2024-11-17 13:16:29.306932] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.137 [2024-11-17 13:16:29.306949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.137 [2024-11-17 13:16:29.306967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.137 [2024-11-17 13:16:29.306983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.306991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.137 [2024-11-17 13:16:29.306999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:40.137 [2024-11-17 13:16:29.307006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2271ce0 is same with the state(6) to be set 00:06:40.137 [2024-11-17 13:16:29.307931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:40.137 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.137 [2024-11-17 13:16:29.309623] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.137 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:40.137 [2024-11-17 13:16:29.309653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2271ce0 (9): Bad file descriptor 00:06:40.137 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.137 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.137 [2024-11-17 13:16:29.316333] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
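The aborted-WRITE burst and controller reset above line up with host_management.sh steps 84/85 shown in this stretch of the log: the host is removed from the subsystem while bdevperf still has I/O in flight, then re-added so the initiator can reconnect. Outside the test harness, a minimal sketch of that RPC sequence (NQNs taken from the log above, rpc.py at its repo-default path) would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # in-flight initiator I/O now completes with ABORTED - SQ DELETION, as logged above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # once the host is re-added, bdev_nvme resets the controller and I/O resumes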
00:06:40.137 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.137 13:16:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62441 00:06:41.510 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62441) - No such process 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:41.510 { 00:06:41.510 "params": { 00:06:41.510 "name": "Nvme$subsystem", 00:06:41.510 "trtype": "$TEST_TRANSPORT", 00:06:41.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:41.510 "adrfam": "ipv4", 00:06:41.510 "trsvcid": "$NVMF_PORT", 00:06:41.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:41.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:41.510 "hdgst": ${hdgst:-false}, 00:06:41.510 "ddgst": ${ddgst:-false} 00:06:41.510 }, 00:06:41.510 "method": "bdev_nvme_attach_controller" 00:06:41.510 } 00:06:41.510 EOF 00:06:41.510 )") 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:41.510 13:16:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:41.510 "params": { 00:06:41.510 "name": "Nvme0", 00:06:41.510 "trtype": "tcp", 00:06:41.510 "traddr": "10.0.0.3", 00:06:41.510 "adrfam": "ipv4", 00:06:41.510 "trsvcid": "4420", 00:06:41.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:41.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:41.510 "hdgst": false, 00:06:41.510 "ddgst": false 00:06:41.510 }, 00:06:41.510 "method": "bdev_nvme_attach_controller" 00:06:41.510 }' 00:06:41.510 [2024-11-17 13:16:30.385416] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:41.510 [2024-11-17 13:16:30.385519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62480 ] 00:06:41.510 [2024-11-17 13:16:30.534413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.510 [2024-11-17 13:16:30.602063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.510 [2024-11-17 13:16:30.685878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.768 Running I/O for 1 seconds... 00:06:42.705 1606.00 IOPS, 100.38 MiB/s 00:06:42.705 Latency(us) 00:06:42.705 [2024-11-17T13:16:31.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.705 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:42.705 Verification LBA range: start 0x0 length 0x400 00:06:42.705 Nvme0n1 : 1.04 1664.62 104.04 0.00 0.00 37729.25 4200.26 37891.72 00:06:42.705 [2024-11-17T13:16:31.929Z] =================================================================================================================== 00:06:42.705 [2024-11-17T13:16:31.929Z] Total : 1664.62 104.04 0.00 0.00 37729.25 4200.26 37891.72 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:42.964 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:43.223 rmmod nvme_tcp 00:06:43.223 rmmod nvme_fabrics 00:06:43.223 rmmod nvme_keyring 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62395 ']' 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62395 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62395 ']' 00:06:43.223 13:16:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62395 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62395 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:43.223 killing process with pid 62395 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62395' 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62395 00:06:43.223 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62395 00:06:43.483 [2024-11-17 13:16:32.456562] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:43.483 13:16:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.483 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.744 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:43.745 00:06:43.745 real 0m5.707s 00:06:43.745 user 0m20.161s 00:06:43.745 sys 0m1.661s 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.745 ************************************ 00:06:43.745 END TEST nvmf_host_management 00:06:43.745 ************************************ 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.745 ************************************ 00:06:43.745 START TEST nvmf_lvol 00:06:43.745 ************************************ 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:43.745 * Looking for test storage... 
00:06:43.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.745 --rc genhtml_branch_coverage=1 00:06:43.745 --rc genhtml_function_coverage=1 00:06:43.745 --rc genhtml_legend=1 00:06:43.745 --rc geninfo_all_blocks=1 00:06:43.745 --rc geninfo_unexecuted_blocks=1 00:06:43.745 00:06:43.745 ' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.745 --rc genhtml_branch_coverage=1 00:06:43.745 --rc genhtml_function_coverage=1 00:06:43.745 --rc genhtml_legend=1 00:06:43.745 --rc geninfo_all_blocks=1 00:06:43.745 --rc geninfo_unexecuted_blocks=1 00:06:43.745 00:06:43.745 ' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.745 --rc genhtml_branch_coverage=1 00:06:43.745 --rc genhtml_function_coverage=1 00:06:43.745 --rc genhtml_legend=1 00:06:43.745 --rc geninfo_all_blocks=1 00:06:43.745 --rc geninfo_unexecuted_blocks=1 00:06:43.745 00:06:43.745 ' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.745 --rc genhtml_branch_coverage=1 00:06:43.745 --rc genhtml_function_coverage=1 00:06:43.745 --rc genhtml_legend=1 00:06:43.745 --rc geninfo_all_blocks=1 00:06:43.745 --rc geninfo_unexecuted_blocks=1 00:06:43.745 00:06:43.745 ' 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.745 13:16:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.745 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.746 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:43.746 
13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.746 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:44.066 Cannot find device "nvmf_init_br" 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:44.066 Cannot find device "nvmf_init_br2" 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:44.066 Cannot find device "nvmf_tgt_br" 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:44.066 13:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:44.066 Cannot find device "nvmf_tgt_br2" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:44.066 Cannot find device "nvmf_init_br" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:44.066 Cannot find device "nvmf_init_br2" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:44.066 Cannot find device "nvmf_tgt_br" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:44.066 Cannot find device "nvmf_tgt_br2" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:44.066 Cannot find device "nvmf_br" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:44.066 Cannot find device "nvmf_init_if" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:44.066 Cannot find device "nvmf_init_if2" 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:44.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:44.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:44.066 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:44.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:44.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:06:44.325 00:06:44.325 --- 10.0.0.3 ping statistics --- 00:06:44.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.325 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:44.325 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:44.325 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:06:44.325 00:06:44.325 --- 10.0.0.4 ping statistics --- 00:06:44.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.325 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:44.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:06:44.325 00:06:44.325 --- 10.0.0.1 ping statistics --- 00:06:44.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.325 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:44.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:44.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:06:44.325 00:06:44.325 --- 10.0.0.2 ping statistics --- 00:06:44.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.325 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62748 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62748 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62748 ']' 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.325 13:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.325 [2024-11-17 13:16:33.399275] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:44.325 [2024-11-17 13:16:33.399357] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.325 [2024-11-17 13:16:33.542434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.583 [2024-11-17 13:16:33.598331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.583 [2024-11-17 13:16:33.598385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.583 [2024-11-17 13:16:33.598395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.584 [2024-11-17 13:16:33.598402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.584 [2024-11-17 13:16:33.598408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.584 [2024-11-17 13:16:33.599570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.584 [2024-11-17 13:16:33.599675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.584 [2024-11-17 13:16:33.599695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.584 [2024-11-17 13:16:33.654149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.151 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:45.409 [2024-11-17 13:16:34.600448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.409 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.977 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:45.977 13:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:46.235 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:46.235 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:46.235 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:46.495 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a681f596-02d7-4d58-91cb-a6c9b49df9ed 00:06:46.495 13:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a681f596-02d7-4d58-91cb-a6c9b49df9ed lvol 20 00:06:46.754 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=094ac760-1bf8-4ffa-8f0a-340836da5e6f 00:06:46.754 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.013 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 094ac760-1bf8-4ffa-8f0a-340836da5e6f 00:06:47.272 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:47.531 [2024-11-17 13:16:36.614502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:47.531 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:47.790 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62823 00:06:47.790 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:47.790 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.723 13:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 094ac760-1bf8-4ffa-8f0a-340836da5e6f MY_SNAPSHOT 00:06:49.290 13:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=371c4a8f-5929-4dc2-87e7-506bb351aca8 00:06:49.290 13:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 094ac760-1bf8-4ffa-8f0a-340836da5e6f 30 00:06:49.549 13:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 371c4a8f-5929-4dc2-87e7-506bb351aca8 MY_CLONE 00:06:49.808 13:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=272161ff-924d-4ab9-9595-0a195df5b6e4 00:06:49.808 13:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 272161ff-924d-4ab9-9595-0a195df5b6e4 00:06:50.375 13:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62823 00:06:58.489 Initializing NVMe Controllers 00:06:58.489 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:58.489 Controller IO queue size 128, less than required. 00:06:58.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:58.489 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:58.489 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:58.489 Initialization complete. Launching workers. 
00:06:58.489 ======================================================== 00:06:58.489 Latency(us) 00:06:58.489 Device Information : IOPS MiB/s Average min max 00:06:58.489 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10472.40 40.91 12223.51 2122.00 46555.37 00:06:58.489 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10467.50 40.89 12229.59 2209.78 89000.93 00:06:58.489 ======================================================== 00:06:58.489 Total : 20939.90 81.80 12226.55 2122.00 89000.93 00:06:58.489 00:06:58.489 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.489 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 094ac760-1bf8-4ffa-8f0a-340836da5e6f 00:06:58.747 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a681f596-02d7-4d58-91cb-a6c9b49df9ed 00:06:59.005 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:59.006 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:59.006 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:59.006 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.006 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.006 rmmod nvme_tcp 00:06:59.006 rmmod nvme_fabrics 00:06:59.006 rmmod nvme_keyring 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62748 ']' 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62748 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62748 ']' 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62748 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62748 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 62748' 00:06:59.006 killing process with pid 62748 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62748 00:06:59.006 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62748 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.264 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:59.265 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:06:59.523 00:06:59.523 real 0m15.838s 00:06:59.523 user 1m5.192s 00:06:59.523 sys 0m4.217s 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.523 ************************************ 00:06:59.523 END TEST nvmf_lvol 00:06:59.523 ************************************ 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.523 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.524 13:16:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.524 ************************************ 00:06:59.524 START TEST nvmf_lvs_grow 00:06:59.524 ************************************ 00:06:59.524 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:59.524 * Looking for test storage... 00:06:59.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:59.524 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.524 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.524 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.783 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.784 --rc genhtml_branch_coverage=1 00:06:59.784 --rc genhtml_function_coverage=1 00:06:59.784 --rc genhtml_legend=1 00:06:59.784 --rc geninfo_all_blocks=1 00:06:59.784 --rc geninfo_unexecuted_blocks=1 00:06:59.784 00:06:59.784 ' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.784 --rc genhtml_branch_coverage=1 00:06:59.784 --rc genhtml_function_coverage=1 00:06:59.784 --rc genhtml_legend=1 00:06:59.784 --rc geninfo_all_blocks=1 00:06:59.784 --rc geninfo_unexecuted_blocks=1 00:06:59.784 00:06:59.784 ' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.784 --rc genhtml_branch_coverage=1 00:06:59.784 --rc genhtml_function_coverage=1 00:06:59.784 --rc genhtml_legend=1 00:06:59.784 --rc geninfo_all_blocks=1 00:06:59.784 --rc geninfo_unexecuted_blocks=1 00:06:59.784 00:06:59.784 ' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.784 --rc genhtml_branch_coverage=1 00:06:59.784 --rc genhtml_function_coverage=1 00:06:59.784 --rc genhtml_legend=1 00:06:59.784 --rc geninfo_all_blocks=1 00:06:59.784 --rc geninfo_unexecuted_blocks=1 00:06:59.784 00:06:59.784 ' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:59.784 13:16:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:59.784 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:59.785 Cannot find device "nvmf_init_br" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:59.785 Cannot find device "nvmf_init_br2" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:59.785 Cannot find device "nvmf_tgt_br" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:59.785 Cannot find device "nvmf_tgt_br2" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:59.785 Cannot find device "nvmf_init_br" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:59.785 Cannot find device "nvmf_init_br2" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:59.785 Cannot find device "nvmf_tgt_br" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:59.785 Cannot find device "nvmf_tgt_br2" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:59.785 Cannot find device "nvmf_br" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:59.785 Cannot find device "nvmf_init_if" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:59.785 Cannot find device "nvmf_init_if2" 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:59.785 13:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:00.044 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:00.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:00.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:07:00.045 00:07:00.045 --- 10.0.0.3 ping statistics --- 00:07:00.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.045 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:00.045 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:00.045 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:07:00.045 00:07:00.045 --- 10.0.0.4 ping statistics --- 00:07:00.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.045 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:00.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:00.045 00:07:00.045 --- 10.0.0.1 ping statistics --- 00:07:00.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.045 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:00.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:00.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:07:00.045 00:07:00.045 --- 10.0.0.2 ping statistics --- 00:07:00.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.045 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63205 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63205 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63205 ']' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.045 13:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:00.303 [2024-11-17 13:16:49.319079] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:00.303 [2024-11-17 13:16:49.319193] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.303 [2024-11-17 13:16:49.460975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.303 [2024-11-17 13:16:49.512550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.303 [2024-11-17 13:16:49.512607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.303 [2024-11-17 13:16:49.512633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.303 [2024-11-17 13:16:49.512640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.303 [2024-11-17 13:16:49.512647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.303 [2024-11-17 13:16:49.513032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.562 [2024-11-17 13:16:49.563754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.129 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:01.388 [2024-11-17 13:16:50.601657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.646 ************************************ 00:07:01.646 START TEST lvs_grow_clean 00:07:01.646 ************************************ 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:01.646 13:16:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:01.646 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:01.905 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:01.905 13:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:02.164 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:02.164 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:02.164 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:02.423 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:02.424 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:02.424 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 lvol 150 00:07:02.683 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f0c60b13-39e5-4e09-9c9d-7437847055a9 00:07:02.683 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:02.683 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:02.683 [2024-11-17 13:16:51.901435] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:02.683 [2024-11-17 13:16:51.901524] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:02.942 true 00:07:02.942 13:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:02.942 13:16:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:03.201 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:03.201 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.201 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f0c60b13-39e5-4e09-9c9d-7437847055a9 00:07:03.460 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:03.719 [2024-11-17 13:16:52.877929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:03.719 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63287 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63287 /var/tmp/bdevperf.sock 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63287 ']' 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:03.977 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:03.978 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:03.978 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.978 13:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:04.236 [2024-11-17 13:16:53.231412] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:04.236 [2024-11-17 13:16:53.231525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63287 ] 00:07:04.236 [2024-11-17 13:16:53.388965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.236 [2024-11-17 13:16:53.446708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.496 [2024-11-17 13:16:53.504109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.064 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.064 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:05.064 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:05.323 Nvme0n1 00:07:05.323 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:05.582 [ 00:07:05.582 { 00:07:05.582 "name": "Nvme0n1", 00:07:05.582 "aliases": [ 00:07:05.582 "f0c60b13-39e5-4e09-9c9d-7437847055a9" 00:07:05.582 ], 00:07:05.582 "product_name": "NVMe disk", 00:07:05.582 "block_size": 4096, 00:07:05.582 "num_blocks": 38912, 00:07:05.582 "uuid": "f0c60b13-39e5-4e09-9c9d-7437847055a9", 00:07:05.582 "numa_id": -1, 00:07:05.582 "assigned_rate_limits": { 00:07:05.582 "rw_ios_per_sec": 0, 00:07:05.582 "rw_mbytes_per_sec": 0, 00:07:05.582 "r_mbytes_per_sec": 0, 00:07:05.582 "w_mbytes_per_sec": 0 00:07:05.582 }, 00:07:05.582 "claimed": false, 00:07:05.582 "zoned": false, 00:07:05.582 "supported_io_types": { 00:07:05.582 "read": true, 00:07:05.582 "write": true, 00:07:05.582 "unmap": true, 00:07:05.582 "flush": true, 00:07:05.582 "reset": true, 00:07:05.582 "nvme_admin": true, 00:07:05.582 "nvme_io": true, 00:07:05.582 "nvme_io_md": false, 00:07:05.582 "write_zeroes": true, 00:07:05.582 "zcopy": false, 00:07:05.582 "get_zone_info": false, 00:07:05.582 "zone_management": false, 00:07:05.582 "zone_append": false, 00:07:05.582 "compare": true, 00:07:05.582 "compare_and_write": true, 00:07:05.582 "abort": true, 00:07:05.582 "seek_hole": false, 00:07:05.582 "seek_data": false, 00:07:05.582 "copy": true, 00:07:05.582 "nvme_iov_md": false 00:07:05.582 }, 00:07:05.582 "memory_domains": [ 00:07:05.582 { 00:07:05.582 "dma_device_id": "system", 00:07:05.582 "dma_device_type": 1 00:07:05.582 } 00:07:05.582 ], 00:07:05.582 "driver_specific": { 00:07:05.582 "nvme": [ 00:07:05.582 { 00:07:05.582 "trid": { 00:07:05.582 "trtype": "TCP", 00:07:05.582 "adrfam": "IPv4", 00:07:05.582 "traddr": "10.0.0.3", 00:07:05.582 "trsvcid": "4420", 00:07:05.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:05.582 }, 00:07:05.582 "ctrlr_data": { 00:07:05.582 "cntlid": 1, 00:07:05.582 "vendor_id": "0x8086", 00:07:05.582 "model_number": "SPDK bdev Controller", 00:07:05.582 "serial_number": "SPDK0", 00:07:05.582 "firmware_revision": "25.01", 00:07:05.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:05.582 "oacs": { 00:07:05.582 "security": 0, 00:07:05.582 "format": 0, 00:07:05.582 "firmware": 0, 
00:07:05.582 "ns_manage": 0 00:07:05.582 }, 00:07:05.582 "multi_ctrlr": true, 00:07:05.582 "ana_reporting": false 00:07:05.582 }, 00:07:05.582 "vs": { 00:07:05.582 "nvme_version": "1.3" 00:07:05.582 }, 00:07:05.582 "ns_data": { 00:07:05.582 "id": 1, 00:07:05.582 "can_share": true 00:07:05.582 } 00:07:05.582 } 00:07:05.582 ], 00:07:05.582 "mp_policy": "active_passive" 00:07:05.582 } 00:07:05.582 } 00:07:05.582 ] 00:07:05.582 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:05.582 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63311 00:07:05.582 13:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:05.582 Running I/O for 10 seconds... 00:07:06.959 Latency(us) 00:07:06.959 [2024-11-17T13:16:56.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.959 Nvme0n1 : 1.00 7902.00 30.87 0.00 0.00 0.00 0.00 0.00 00:07:06.959 [2024-11-17T13:16:56.183Z] =================================================================================================================== 00:07:06.959 [2024-11-17T13:16:56.183Z] Total : 7902.00 30.87 0.00 0.00 0.00 0.00 0.00 00:07:06.959 00:07:07.527 13:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:07.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.786 Nvme0n1 : 2.00 7951.50 31.06 0.00 0.00 0.00 0.00 0.00 00:07:07.786 [2024-11-17T13:16:57.010Z] =================================================================================================================== 00:07:07.786 [2024-11-17T13:16:57.010Z] Total : 7951.50 31.06 0.00 0.00 0.00 0.00 0.00 00:07:07.786 00:07:08.045 true 00:07:08.045 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:08.045 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:08.304 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:08.304 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:08.304 13:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63311 00:07:08.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.870 Nvme0n1 : 3.00 7883.33 30.79 0.00 0.00 0.00 0.00 0.00 00:07:08.870 [2024-11-17T13:16:58.094Z] =================================================================================================================== 00:07:08.870 [2024-11-17T13:16:58.094Z] Total : 7883.33 30.79 0.00 0.00 0.00 0.00 0.00 00:07:08.870 00:07:09.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.806 Nvme0n1 : 4.00 7912.75 30.91 0.00 0.00 0.00 0.00 0.00 00:07:09.806 [2024-11-17T13:16:59.030Z] 
=================================================================================================================== 00:07:09.806 [2024-11-17T13:16:59.030Z] Total : 7912.75 30.91 0.00 0.00 0.00 0.00 0.00 00:07:09.806 00:07:10.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.751 Nvme0n1 : 5.00 7769.00 30.35 0.00 0.00 0.00 0.00 0.00 00:07:10.751 [2024-11-17T13:16:59.975Z] =================================================================================================================== 00:07:10.751 [2024-11-17T13:16:59.975Z] Total : 7769.00 30.35 0.00 0.00 0.00 0.00 0.00 00:07:10.751 00:07:11.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.687 Nvme0n1 : 6.00 7786.50 30.42 0.00 0.00 0.00 0.00 0.00 00:07:11.687 [2024-11-17T13:17:00.911Z] =================================================================================================================== 00:07:11.687 [2024-11-17T13:17:00.911Z] Total : 7786.50 30.42 0.00 0.00 0.00 0.00 0.00 00:07:11.687 00:07:12.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.624 Nvme0n1 : 7.00 7799.00 30.46 0.00 0.00 0.00 0.00 0.00 00:07:12.624 [2024-11-17T13:17:01.848Z] =================================================================================================================== 00:07:12.624 [2024-11-17T13:17:01.848Z] Total : 7799.00 30.46 0.00 0.00 0.00 0.00 0.00 00:07:12.624 00:07:14.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.001 Nvme0n1 : 8.00 7776.62 30.38 0.00 0.00 0.00 0.00 0.00 00:07:14.001 [2024-11-17T13:17:03.225Z] =================================================================================================================== 00:07:14.001 [2024-11-17T13:17:03.225Z] Total : 7776.62 30.38 0.00 0.00 0.00 0.00 0.00 00:07:14.001 00:07:14.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.947 Nvme0n1 : 9.00 7773.33 30.36 0.00 0.00 0.00 0.00 0.00 00:07:14.947 [2024-11-17T13:17:04.171Z] =================================================================================================================== 00:07:14.947 [2024-11-17T13:17:04.171Z] Total : 7773.33 30.36 0.00 0.00 0.00 0.00 0.00 00:07:14.947 00:07:15.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.919 Nvme0n1 : 10.00 7758.00 30.30 0.00 0.00 0.00 0.00 0.00 00:07:15.919 [2024-11-17T13:17:05.143Z] =================================================================================================================== 00:07:15.919 [2024-11-17T13:17:05.143Z] Total : 7758.00 30.30 0.00 0.00 0.00 0.00 0.00 00:07:15.919 00:07:15.919 00:07:15.919 Latency(us) 00:07:15.919 [2024-11-17T13:17:05.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.919 Nvme0n1 : 10.00 7768.52 30.35 0.00 0.00 16472.35 3619.37 116773.24 00:07:15.919 [2024-11-17T13:17:05.143Z] =================================================================================================================== 00:07:15.919 [2024-11-17T13:17:05.143Z] Total : 7768.52 30.35 0.00 0.00 16472.35 3619.37 116773.24 00:07:15.919 { 00:07:15.919 "results": [ 00:07:15.919 { 00:07:15.919 "job": "Nvme0n1", 00:07:15.919 "core_mask": "0x2", 00:07:15.919 "workload": "randwrite", 00:07:15.919 "status": "finished", 00:07:15.919 "queue_depth": 128, 00:07:15.919 "io_size": 4096, 00:07:15.919 "runtime": 
10.002933, 00:07:15.919 "iops": 7768.521492646207, 00:07:15.919 "mibps": 30.345787080649245, 00:07:15.919 "io_failed": 0, 00:07:15.919 "io_timeout": 0, 00:07:15.919 "avg_latency_us": 16472.354183727428, 00:07:15.919 "min_latency_us": 3619.3745454545456, 00:07:15.919 "max_latency_us": 116773.23636363636 00:07:15.919 } 00:07:15.919 ], 00:07:15.919 "core_count": 1 00:07:15.919 } 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63287 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63287 ']' 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63287 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63287 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.919 killing process with pid 63287 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63287' 00:07:15.919 Received shutdown signal, test time was about 10.000000 seconds 00:07:15.919 00:07:15.919 Latency(us) 00:07:15.919 [2024-11-17T13:17:05.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.919 [2024-11-17T13:17:05.143Z] =================================================================================================================== 00:07:15.919 [2024-11-17T13:17:05.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63287 00:07:15.919 13:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63287 00:07:15.919 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:16.179 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:16.437 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:16.437 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:16.696 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:16.696 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:16.696 13:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:16.955 [2024-11-17 13:17:06.050316] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:16.955 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:17.213 request: 00:07:17.213 { 00:07:17.213 "uuid": "1a3f8846-3100-4b3d-aec3-a4d17cb2dca4", 00:07:17.213 "method": "bdev_lvol_get_lvstores", 00:07:17.213 "req_id": 1 00:07:17.213 } 00:07:17.213 Got JSON-RPC error response 00:07:17.213 response: 00:07:17.213 { 00:07:17.213 "code": -19, 00:07:17.213 "message": "No such device" 00:07:17.213 } 00:07:17.213 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:17.213 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.213 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.213 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.213 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.474 aio_bdev 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f0c60b13-39e5-4e09-9c9d-7437847055a9 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f0c60b13-39e5-4e09-9c9d-7437847055a9 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.474 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.734 13:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0c60b13-39e5-4e09-9c9d-7437847055a9 -t 2000 00:07:17.993 [ 00:07:17.993 { 00:07:17.993 "name": "f0c60b13-39e5-4e09-9c9d-7437847055a9", 00:07:17.993 "aliases": [ 00:07:17.993 "lvs/lvol" 00:07:17.993 ], 00:07:17.993 "product_name": "Logical Volume", 00:07:17.993 "block_size": 4096, 00:07:17.993 "num_blocks": 38912, 00:07:17.993 "uuid": "f0c60b13-39e5-4e09-9c9d-7437847055a9", 00:07:17.993 "assigned_rate_limits": { 00:07:17.993 "rw_ios_per_sec": 0, 00:07:17.993 "rw_mbytes_per_sec": 0, 00:07:17.993 "r_mbytes_per_sec": 0, 00:07:17.993 "w_mbytes_per_sec": 0 00:07:17.993 }, 00:07:17.993 "claimed": false, 00:07:17.993 "zoned": false, 00:07:17.993 "supported_io_types": { 00:07:17.993 "read": true, 00:07:17.993 "write": true, 00:07:17.993 "unmap": true, 00:07:17.993 "flush": false, 00:07:17.993 "reset": true, 00:07:17.993 "nvme_admin": false, 00:07:17.993 "nvme_io": false, 00:07:17.993 "nvme_io_md": false, 00:07:17.993 "write_zeroes": true, 00:07:17.993 "zcopy": false, 00:07:17.993 "get_zone_info": false, 00:07:17.993 "zone_management": false, 00:07:17.993 "zone_append": false, 00:07:17.993 "compare": false, 00:07:17.993 "compare_and_write": false, 00:07:17.993 "abort": false, 00:07:17.993 "seek_hole": true, 00:07:17.993 "seek_data": true, 00:07:17.993 "copy": false, 00:07:17.993 "nvme_iov_md": false 00:07:17.993 }, 00:07:17.993 "driver_specific": { 00:07:17.993 "lvol": { 00:07:17.993 "lvol_store_uuid": "1a3f8846-3100-4b3d-aec3-a4d17cb2dca4", 00:07:17.993 "base_bdev": "aio_bdev", 00:07:17.993 "thin_provision": false, 00:07:17.993 "num_allocated_clusters": 38, 00:07:17.993 "snapshot": false, 00:07:17.993 "clone": false, 00:07:17.993 "esnap_clone": false 00:07:17.993 } 00:07:17.993 } 00:07:17.993 } 00:07:17.993 ] 00:07:17.993 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:17.993 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:17.993 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:18.252 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:18.252 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:18.252 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:18.511 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:18.511 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f0c60b13-39e5-4e09-9c9d-7437847055a9 00:07:18.768 13:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a3f8846-3100-4b3d-aec3-a4d17cb2dca4 00:07:19.028 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.288 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:19.856 ************************************ 00:07:19.856 END TEST lvs_grow_clean 00:07:19.856 ************************************ 00:07:19.856 00:07:19.856 real 0m18.232s 00:07:19.856 user 0m17.188s 00:07:19.856 sys 0m2.474s 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.856 ************************************ 00:07:19.856 START TEST lvs_grow_dirty 00:07:19.856 ************************************ 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:19.856 13:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:20.115 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:20.115 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:20.374 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:20.374 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:20.374 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:20.941 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:20.941 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:20.941 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bf67d300-7444-44ac-94da-7bc8c2b848f9 lvol 150 00:07:21.199 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:21.199 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:21.199 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:21.458 [2024-11-17 13:17:10.507947] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:21.458 [2024-11-17 13:17:10.508029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:21.458 true 00:07:21.458 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:21.458 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:21.716 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:21.716 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.975 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:22.233 13:17:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:22.491 [2024-11-17 13:17:11.616588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:22.491 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63571 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63571 /var/tmp/bdevperf.sock 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63571 ']' 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.749 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.007 [2024-11-17 13:17:11.972605] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
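On the initiator side, the trace shows a separate bdevperf process attaching to that namespace over NVMe/TCP and running random writes while the lvstore is grown underneath it. A rough sketch of that half, again pieced together from the trace (paths abbreviated; backgrounding of the long-running commands is an assumption, not taken from the script):

  # bdevperf instance with its own RPC socket; the workload is started later via perform_tests
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # 10 s randwrite run; the lvstore is grown on the target while this is in flight
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py bdev_lvol_grow_lvstore -u bf67d300-7444-44ac-94da-7bc8c2b848f9
  # total_data_clusters should now read 99 instead of the initial 49
  scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 | jq -r '.[0].total_data_clusters'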
00:07:23.007 [2024-11-17 13:17:11.972699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63571 ] 00:07:23.007 [2024-11-17 13:17:12.127101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.007 [2024-11-17 13:17:12.186189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.265 [2024-11-17 13:17:12.242942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.832 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.832 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:23.832 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.091 Nvme0n1 00:07:24.091 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:24.349 [ 00:07:24.349 { 00:07:24.349 "name": "Nvme0n1", 00:07:24.349 "aliases": [ 00:07:24.349 "65615652-ff45-4a3d-b0b0-6aff3d4360aa" 00:07:24.349 ], 00:07:24.349 "product_name": "NVMe disk", 00:07:24.349 "block_size": 4096, 00:07:24.349 "num_blocks": 38912, 00:07:24.349 "uuid": "65615652-ff45-4a3d-b0b0-6aff3d4360aa", 00:07:24.349 "numa_id": -1, 00:07:24.349 "assigned_rate_limits": { 00:07:24.349 "rw_ios_per_sec": 0, 00:07:24.349 "rw_mbytes_per_sec": 0, 00:07:24.349 "r_mbytes_per_sec": 0, 00:07:24.349 "w_mbytes_per_sec": 0 00:07:24.349 }, 00:07:24.349 "claimed": false, 00:07:24.349 "zoned": false, 00:07:24.349 "supported_io_types": { 00:07:24.349 "read": true, 00:07:24.349 "write": true, 00:07:24.349 "unmap": true, 00:07:24.349 "flush": true, 00:07:24.349 "reset": true, 00:07:24.349 "nvme_admin": true, 00:07:24.349 "nvme_io": true, 00:07:24.349 "nvme_io_md": false, 00:07:24.349 "write_zeroes": true, 00:07:24.349 "zcopy": false, 00:07:24.349 "get_zone_info": false, 00:07:24.349 "zone_management": false, 00:07:24.349 "zone_append": false, 00:07:24.349 "compare": true, 00:07:24.349 "compare_and_write": true, 00:07:24.349 "abort": true, 00:07:24.349 "seek_hole": false, 00:07:24.349 "seek_data": false, 00:07:24.349 "copy": true, 00:07:24.349 "nvme_iov_md": false 00:07:24.349 }, 00:07:24.349 "memory_domains": [ 00:07:24.349 { 00:07:24.349 "dma_device_id": "system", 00:07:24.349 "dma_device_type": 1 00:07:24.349 } 00:07:24.349 ], 00:07:24.349 "driver_specific": { 00:07:24.349 "nvme": [ 00:07:24.349 { 00:07:24.349 "trid": { 00:07:24.349 "trtype": "TCP", 00:07:24.349 "adrfam": "IPv4", 00:07:24.349 "traddr": "10.0.0.3", 00:07:24.349 "trsvcid": "4420", 00:07:24.349 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:24.349 }, 00:07:24.349 "ctrlr_data": { 00:07:24.349 "cntlid": 1, 00:07:24.349 "vendor_id": "0x8086", 00:07:24.349 "model_number": "SPDK bdev Controller", 00:07:24.349 "serial_number": "SPDK0", 00:07:24.349 "firmware_revision": "25.01", 00:07:24.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.349 "oacs": { 00:07:24.349 "security": 0, 00:07:24.349 "format": 0, 00:07:24.349 "firmware": 0, 
00:07:24.350 "ns_manage": 0 00:07:24.350 }, 00:07:24.350 "multi_ctrlr": true, 00:07:24.350 "ana_reporting": false 00:07:24.350 }, 00:07:24.350 "vs": { 00:07:24.350 "nvme_version": "1.3" 00:07:24.350 }, 00:07:24.350 "ns_data": { 00:07:24.350 "id": 1, 00:07:24.350 "can_share": true 00:07:24.350 } 00:07:24.350 } 00:07:24.350 ], 00:07:24.350 "mp_policy": "active_passive" 00:07:24.350 } 00:07:24.350 } 00:07:24.350 ] 00:07:24.350 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63589 00:07:24.350 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.350 13:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:24.608 Running I/O for 10 seconds... 00:07:25.543 Latency(us) 00:07:25.543 [2024-11-17T13:17:14.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.543 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:25.543 [2024-11-17T13:17:14.767Z] =================================================================================================================== 00:07:25.543 [2024-11-17T13:17:14.767Z] Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:25.543 00:07:26.477 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:26.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.477 Nvme0n1 : 2.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:26.477 [2024-11-17T13:17:15.701Z] =================================================================================================================== 00:07:26.477 [2024-11-17T13:17:15.701Z] Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:26.477 00:07:26.735 true 00:07:26.735 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:26.735 13:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:26.994 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:26.994 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:26.994 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63589 00:07:27.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.560 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:07:27.560 [2024-11-17T13:17:16.785Z] =================================================================================================================== 00:07:27.561 [2024-11-17T13:17:16.785Z] Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:07:27.561 00:07:28.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.495 Nvme0n1 : 4.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:28.495 [2024-11-17T13:17:17.719Z] 
=================================================================================================================== 00:07:28.495 [2024-11-17T13:17:17.719Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:28.495 00:07:29.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.474 Nvme0n1 : 5.00 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:07:29.474 [2024-11-17T13:17:18.698Z] =================================================================================================================== 00:07:29.474 [2024-11-17T13:17:18.698Z] Total : 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:07:29.474 00:07:30.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.849 Nvme0n1 : 6.00 7281.17 28.44 0.00 0.00 0.00 0.00 0.00 00:07:30.849 [2024-11-17T13:17:20.073Z] =================================================================================================================== 00:07:30.849 [2024-11-17T13:17:20.073Z] Total : 7281.17 28.44 0.00 0.00 0.00 0.00 0.00 00:07:30.849 00:07:31.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.785 Nvme0n1 : 7.00 7220.71 28.21 0.00 0.00 0.00 0.00 0.00 00:07:31.785 [2024-11-17T13:17:21.009Z] =================================================================================================================== 00:07:31.785 [2024-11-17T13:17:21.009Z] Total : 7220.71 28.21 0.00 0.00 0.00 0.00 0.00 00:07:31.785 00:07:32.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.721 Nvme0n1 : 8.00 7191.25 28.09 0.00 0.00 0.00 0.00 0.00 00:07:32.721 [2024-11-17T13:17:21.945Z] =================================================================================================================== 00:07:32.721 [2024-11-17T13:17:21.945Z] Total : 7191.25 28.09 0.00 0.00 0.00 0.00 0.00 00:07:32.721 00:07:33.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.657 Nvme0n1 : 9.00 7182.44 28.06 0.00 0.00 0.00 0.00 0.00 00:07:33.657 [2024-11-17T13:17:22.881Z] =================================================================================================================== 00:07:33.657 [2024-11-17T13:17:22.881Z] Total : 7182.44 28.06 0.00 0.00 0.00 0.00 0.00 00:07:33.657 00:07:34.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.591 Nvme0n1 : 10.00 7137.30 27.88 0.00 0.00 0.00 0.00 0.00 00:07:34.591 [2024-11-17T13:17:23.815Z] =================================================================================================================== 00:07:34.591 [2024-11-17T13:17:23.815Z] Total : 7137.30 27.88 0.00 0.00 0.00 0.00 0.00 00:07:34.591 00:07:34.591 00:07:34.591 Latency(us) 00:07:34.591 [2024-11-17T13:17:23.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.591 Nvme0n1 : 10.02 7139.06 27.89 0.00 0.00 17924.06 5808.87 62914.56 00:07:34.591 [2024-11-17T13:17:23.815Z] =================================================================================================================== 00:07:34.591 [2024-11-17T13:17:23.815Z] Total : 7139.06 27.89 0.00 0.00 17924.06 5808.87 62914.56 00:07:34.591 { 00:07:34.591 "results": [ 00:07:34.591 { 00:07:34.591 "job": "Nvme0n1", 00:07:34.591 "core_mask": "0x2", 00:07:34.591 "workload": "randwrite", 00:07:34.591 "status": "finished", 00:07:34.591 "queue_depth": 128, 00:07:34.591 "io_size": 4096, 00:07:34.591 "runtime": 
10.015465, 00:07:34.591 "iops": 7139.059444568974, 00:07:34.591 "mibps": 27.886950955347555, 00:07:34.591 "io_failed": 0, 00:07:34.591 "io_timeout": 0, 00:07:34.591 "avg_latency_us": 17924.061753440194, 00:07:34.591 "min_latency_us": 5808.872727272727, 00:07:34.591 "max_latency_us": 62914.56 00:07:34.591 } 00:07:34.591 ], 00:07:34.591 "core_count": 1 00:07:34.591 } 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63571 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63571 ']' 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63571 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63571 00:07:34.591 killing process with pid 63571 00:07:34.591 Received shutdown signal, test time was about 10.000000 seconds 00:07:34.591 00:07:34.591 Latency(us) 00:07:34.591 [2024-11-17T13:17:23.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.591 [2024-11-17T13:17:23.815Z] =================================================================================================================== 00:07:34.591 [2024-11-17T13:17:23.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:34.591 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:34.592 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63571' 00:07:34.592 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63571 00:07:34.592 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63571 00:07:34.850 13:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:35.109 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.366 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:35.366 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:35.933 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:35.933 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:35.933 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63205 00:07:35.933 
13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63205 00:07:35.933 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63205 Killed "${NVMF_APP[@]}" "$@" 00:07:35.933 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:35.933 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:35.933 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63727 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63727 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63727 ']' 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.934 13:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.934 [2024-11-17 13:17:24.974492] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:35.934 [2024-11-17 13:17:24.974598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.934 [2024-11-17 13:17:25.125975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.192 [2024-11-17 13:17:25.178654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.192 [2024-11-17 13:17:25.178698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.192 [2024-11-17 13:17:25.178708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.192 [2024-11-17 13:17:25.178715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.192 [2024-11-17 13:17:25.178721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
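Unlike the clean variant, the dirty run does not unload the lvstore before shutdown: the trace above shows the original nvmf target (pid 63205) being killed with SIGKILL and a fresh target being started, leaving dirty lvstore metadata on the 400M AIO file. Re-creating the AIO bdev on that file is then expected to trigger blobstore recovery, which the NOTICE lines further on report. A minimal sketch of that re-attach check, using the UUIDs from this run (paths abbreviated):

  # re-attach the dirty file; examine of the lvstore performs blobstore recovery
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # the lvol should reappear once recovery completes...
  scripts/rpc.py bdev_get_bdevs -b 65615652-ff45-4a3d-b0b0-6aff3d4360aa -t 2000
  # ...and the grown lvstore should still report 99 total / 61 free clusters
  scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 | jq -r '.[0].free_clusters'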
00:07:36.192 [2024-11-17 13:17:25.179136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.192 [2024-11-17 13:17:25.231248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.759 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.759 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:36.759 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.759 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.759 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.759 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.017 13:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.276 [2024-11-17 13:17:26.263397] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:37.276 [2024-11-17 13:17:26.263734] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:37.276 [2024-11-17 13:17:26.263965] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.276 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.535 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65615652-ff45-4a3d-b0b0-6aff3d4360aa -t 2000 00:07:37.794 [ 00:07:37.794 { 00:07:37.794 "name": "65615652-ff45-4a3d-b0b0-6aff3d4360aa", 00:07:37.794 "aliases": [ 00:07:37.794 "lvs/lvol" 00:07:37.794 ], 00:07:37.794 "product_name": "Logical Volume", 00:07:37.794 "block_size": 4096, 00:07:37.794 "num_blocks": 38912, 00:07:37.794 "uuid": "65615652-ff45-4a3d-b0b0-6aff3d4360aa", 00:07:37.794 "assigned_rate_limits": { 00:07:37.794 "rw_ios_per_sec": 0, 00:07:37.794 "rw_mbytes_per_sec": 0, 00:07:37.794 "r_mbytes_per_sec": 0, 00:07:37.794 "w_mbytes_per_sec": 0 00:07:37.794 }, 00:07:37.794 
"claimed": false, 00:07:37.794 "zoned": false, 00:07:37.794 "supported_io_types": { 00:07:37.794 "read": true, 00:07:37.794 "write": true, 00:07:37.794 "unmap": true, 00:07:37.794 "flush": false, 00:07:37.794 "reset": true, 00:07:37.794 "nvme_admin": false, 00:07:37.794 "nvme_io": false, 00:07:37.794 "nvme_io_md": false, 00:07:37.794 "write_zeroes": true, 00:07:37.794 "zcopy": false, 00:07:37.794 "get_zone_info": false, 00:07:37.794 "zone_management": false, 00:07:37.794 "zone_append": false, 00:07:37.794 "compare": false, 00:07:37.794 "compare_and_write": false, 00:07:37.794 "abort": false, 00:07:37.794 "seek_hole": true, 00:07:37.794 "seek_data": true, 00:07:37.794 "copy": false, 00:07:37.794 "nvme_iov_md": false 00:07:37.794 }, 00:07:37.794 "driver_specific": { 00:07:37.794 "lvol": { 00:07:37.794 "lvol_store_uuid": "bf67d300-7444-44ac-94da-7bc8c2b848f9", 00:07:37.794 "base_bdev": "aio_bdev", 00:07:37.794 "thin_provision": false, 00:07:37.794 "num_allocated_clusters": 38, 00:07:37.794 "snapshot": false, 00:07:37.794 "clone": false, 00:07:37.794 "esnap_clone": false 00:07:37.794 } 00:07:37.794 } 00:07:37.794 } 00:07:37.794 ] 00:07:37.794 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:37.794 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:37.794 13:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:38.052 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:38.052 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:38.052 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:38.311 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:38.311 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.571 [2024-11-17 13:17:27.681247] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.571 13:17:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.571 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:38.830 request: 00:07:38.830 { 00:07:38.830 "uuid": "bf67d300-7444-44ac-94da-7bc8c2b848f9", 00:07:38.830 "method": "bdev_lvol_get_lvstores", 00:07:38.830 "req_id": 1 00:07:38.830 } 00:07:38.830 Got JSON-RPC error response 00:07:38.830 response: 00:07:38.830 { 00:07:38.830 "code": -19, 00:07:38.830 "message": "No such device" 00:07:38.830 } 00:07:38.830 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:38.830 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.830 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.830 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.830 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.088 aio_bdev 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.088 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.347 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65615652-ff45-4a3d-b0b0-6aff3d4360aa -t 2000 00:07:39.606 [ 00:07:39.606 { 
00:07:39.606 "name": "65615652-ff45-4a3d-b0b0-6aff3d4360aa", 00:07:39.606 "aliases": [ 00:07:39.606 "lvs/lvol" 00:07:39.606 ], 00:07:39.606 "product_name": "Logical Volume", 00:07:39.606 "block_size": 4096, 00:07:39.606 "num_blocks": 38912, 00:07:39.606 "uuid": "65615652-ff45-4a3d-b0b0-6aff3d4360aa", 00:07:39.606 "assigned_rate_limits": { 00:07:39.606 "rw_ios_per_sec": 0, 00:07:39.606 "rw_mbytes_per_sec": 0, 00:07:39.606 "r_mbytes_per_sec": 0, 00:07:39.606 "w_mbytes_per_sec": 0 00:07:39.606 }, 00:07:39.606 "claimed": false, 00:07:39.606 "zoned": false, 00:07:39.606 "supported_io_types": { 00:07:39.606 "read": true, 00:07:39.606 "write": true, 00:07:39.606 "unmap": true, 00:07:39.606 "flush": false, 00:07:39.606 "reset": true, 00:07:39.606 "nvme_admin": false, 00:07:39.606 "nvme_io": false, 00:07:39.606 "nvme_io_md": false, 00:07:39.606 "write_zeroes": true, 00:07:39.606 "zcopy": false, 00:07:39.606 "get_zone_info": false, 00:07:39.606 "zone_management": false, 00:07:39.606 "zone_append": false, 00:07:39.606 "compare": false, 00:07:39.606 "compare_and_write": false, 00:07:39.606 "abort": false, 00:07:39.606 "seek_hole": true, 00:07:39.606 "seek_data": true, 00:07:39.606 "copy": false, 00:07:39.606 "nvme_iov_md": false 00:07:39.606 }, 00:07:39.606 "driver_specific": { 00:07:39.606 "lvol": { 00:07:39.606 "lvol_store_uuid": "bf67d300-7444-44ac-94da-7bc8c2b848f9", 00:07:39.606 "base_bdev": "aio_bdev", 00:07:39.606 "thin_provision": false, 00:07:39.606 "num_allocated_clusters": 38, 00:07:39.606 "snapshot": false, 00:07:39.606 "clone": false, 00:07:39.606 "esnap_clone": false 00:07:39.606 } 00:07:39.606 } 00:07:39.606 } 00:07:39.606 ] 00:07:39.606 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:39.606 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:39.606 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:39.865 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:39.865 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:39.865 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:40.133 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.133 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 65615652-ff45-4a3d-b0b0-6aff3d4360aa 00:07:40.396 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf67d300-7444-44ac-94da-7bc8c2b848f9 00:07:40.962 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.962 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:41.529 ************************************ 00:07:41.529 END TEST lvs_grow_dirty 00:07:41.529 ************************************ 00:07:41.529 00:07:41.529 real 0m21.659s 00:07:41.529 user 0m44.306s 00:07:41.529 sys 0m8.501s 00:07:41.529 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.529 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:41.530 nvmf_trace.0 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.530 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.789 rmmod nvme_tcp 00:07:41.789 rmmod nvme_fabrics 00:07:41.789 rmmod nvme_keyring 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63727 ']' 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63727 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63727 ']' 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63727 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:41.789 13:17:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63727 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63727' 00:07:41.789 killing process with pid 63727 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63727 00:07:41.789 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63727 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:42.048 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:42.049 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:42.049 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:42.049 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:42.049 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.049 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:42.308 00:07:42.308 real 0m42.663s 00:07:42.308 user 1m8.330s 00:07:42.308 sys 0m11.744s 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.308 ************************************ 00:07:42.308 END TEST nvmf_lvs_grow 00:07:42.308 ************************************ 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.308 ************************************ 00:07:42.308 START TEST nvmf_bdev_io_wait 00:07:42.308 ************************************ 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.308 * Looking for test storage... 
00:07:42.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.308 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.568 --rc genhtml_branch_coverage=1 00:07:42.568 --rc genhtml_function_coverage=1 00:07:42.568 --rc genhtml_legend=1 00:07:42.568 --rc geninfo_all_blocks=1 00:07:42.568 --rc geninfo_unexecuted_blocks=1 00:07:42.568 00:07:42.568 ' 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.568 --rc genhtml_branch_coverage=1 00:07:42.568 --rc genhtml_function_coverage=1 00:07:42.568 --rc genhtml_legend=1 00:07:42.568 --rc geninfo_all_blocks=1 00:07:42.568 --rc geninfo_unexecuted_blocks=1 00:07:42.568 00:07:42.568 ' 00:07:42.568 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.568 --rc genhtml_branch_coverage=1 00:07:42.568 --rc genhtml_function_coverage=1 00:07:42.569 --rc genhtml_legend=1 00:07:42.569 --rc geninfo_all_blocks=1 00:07:42.569 --rc geninfo_unexecuted_blocks=1 00:07:42.569 00:07:42.569 ' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.569 --rc genhtml_branch_coverage=1 00:07:42.569 --rc genhtml_function_coverage=1 00:07:42.569 --rc genhtml_legend=1 00:07:42.569 --rc geninfo_all_blocks=1 00:07:42.569 --rc geninfo_unexecuted_blocks=1 00:07:42.569 00:07:42.569 ' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.569 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:42.569 
13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:42.569 Cannot find device "nvmf_init_br" 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:42.569 Cannot find device "nvmf_init_br2" 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:42.569 Cannot find device "nvmf_tgt_br" 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.569 Cannot find device "nvmf_tgt_br2" 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:42.569 Cannot find device "nvmf_init_br" 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:42.569 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:42.570 Cannot find device "nvmf_init_br2" 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:42.570 Cannot find device "nvmf_tgt_br" 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:42.570 Cannot find device "nvmf_tgt_br2" 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:42.570 Cannot find device "nvmf_br" 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:42.570 Cannot find device "nvmf_init_if" 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:42.570 Cannot find device "nvmf_init_if2" 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:42.570 
13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.570 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:42.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:42.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:42.829 00:07:42.829 --- 10.0.0.3 ping statistics --- 00:07:42.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.829 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:42.829 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:42.829 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:07:42.829 00:07:42.829 --- 10.0.0.4 ping statistics --- 00:07:42.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.829 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:42.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:42.829 00:07:42.829 --- 10.0.0.1 ping statistics --- 00:07:42.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.829 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:42.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:42.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:42.829 00:07:42.829 --- 10.0.0.2 ping statistics --- 00:07:42.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.829 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.829 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.830 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.830 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.830 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64100 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64100 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64100 ']' 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.830 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.088 [2024-11-17 13:17:32.067663] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:43.088 [2024-11-17 13:17:32.067748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.088 [2024-11-17 13:17:32.220164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.088 [2024-11-17 13:17:32.283313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.088 [2024-11-17 13:17:32.283373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.088 [2024-11-17 13:17:32.283387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.088 [2024-11-17 13:17:32.283397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.088 [2024-11-17 13:17:32.283406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.088 [2024-11-17 13:17:32.284798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.088 [2024-11-17 13:17:32.284864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.088 [2024-11-17 13:17:32.285007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.088 [2024-11-17 13:17:32.285014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 [2024-11-17 13:17:32.465456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 [2024-11-17 13:17:32.482202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 Malloc0 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.351 [2024-11-17 13:17:32.540328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64129 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64131 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:43.351 13:17:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:43.351 { 00:07:43.351 "params": { 00:07:43.351 "name": "Nvme$subsystem", 00:07:43.351 "trtype": "$TEST_TRANSPORT", 00:07:43.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.351 "adrfam": "ipv4", 00:07:43.351 "trsvcid": "$NVMF_PORT", 00:07:43.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:43.351 "hdgst": ${hdgst:-false}, 00:07:43.351 "ddgst": ${ddgst:-false} 00:07:43.351 }, 00:07:43.351 "method": "bdev_nvme_attach_controller" 00:07:43.351 } 00:07:43.351 EOF 00:07:43.351 )") 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64133 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:43.351 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:43.351 { 00:07:43.351 "params": { 00:07:43.352 "name": "Nvme$subsystem", 00:07:43.352 "trtype": "$TEST_TRANSPORT", 00:07:43.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.352 "adrfam": "ipv4", 00:07:43.352 "trsvcid": "$NVMF_PORT", 00:07:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:43.352 "hdgst": ${hdgst:-false}, 00:07:43.352 "ddgst": ${ddgst:-false} 00:07:43.352 }, 00:07:43.352 "method": "bdev_nvme_attach_controller" 00:07:43.352 } 00:07:43.352 EOF 00:07:43.352 )") 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:43.352 { 00:07:43.352 "params": { 00:07:43.352 "name": "Nvme$subsystem", 00:07:43.352 "trtype": "$TEST_TRANSPORT", 00:07:43.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.352 "adrfam": "ipv4", 00:07:43.352 "trsvcid": "$NVMF_PORT", 00:07:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:43.352 "hdgst": ${hdgst:-false}, 00:07:43.352 "ddgst": ${ddgst:-false} 
00:07:43.352 }, 00:07:43.352 "method": "bdev_nvme_attach_controller" 00:07:43.352 } 00:07:43.352 EOF 00:07:43.352 )") 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:43.352 { 00:07:43.352 "params": { 00:07:43.352 "name": "Nvme$subsystem", 00:07:43.352 "trtype": "$TEST_TRANSPORT", 00:07:43.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.352 "adrfam": "ipv4", 00:07:43.352 "trsvcid": "$NVMF_PORT", 00:07:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:43.352 "hdgst": ${hdgst:-false}, 00:07:43.352 "ddgst": ${ddgst:-false} 00:07:43.352 }, 00:07:43.352 "method": "bdev_nvme_attach_controller" 00:07:43.352 } 00:07:43.352 EOF 00:07:43.352 )") 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64136 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:43.352 "params": { 00:07:43.352 "name": "Nvme1", 00:07:43.352 "trtype": "tcp", 00:07:43.352 "traddr": "10.0.0.3", 00:07:43.352 "adrfam": "ipv4", 00:07:43.352 "trsvcid": "4420", 00:07:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:43.352 "hdgst": false, 00:07:43.352 "ddgst": false 00:07:43.352 }, 00:07:43.352 "method": "bdev_nvme_attach_controller" 00:07:43.352 }' 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:43.352 "params": { 00:07:43.352 "name": "Nvme1", 00:07:43.352 "trtype": "tcp", 00:07:43.352 "traddr": "10.0.0.3", 00:07:43.352 "adrfam": "ipv4", 00:07:43.352 "trsvcid": "4420", 00:07:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:43.352 "hdgst": false, 00:07:43.352 "ddgst": false 00:07:43.352 }, 00:07:43.352 "method": "bdev_nvme_attach_controller" 00:07:43.352 }' 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:43.352 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:43.616 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:43.616 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:43.616 "params": { 00:07:43.616 "name": "Nvme1", 00:07:43.616 "trtype": "tcp", 00:07:43.616 "traddr": "10.0.0.3", 00:07:43.616 "adrfam": "ipv4", 00:07:43.616 "trsvcid": "4420", 00:07:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:43.616 "hdgst": false, 00:07:43.616 "ddgst": false 00:07:43.616 }, 00:07:43.616 "method": "bdev_nvme_attach_controller" 00:07:43.616 }' 00:07:43.616 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:43.616 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:43.616 "params": { 00:07:43.616 "name": "Nvme1", 00:07:43.616 "trtype": "tcp", 00:07:43.616 "traddr": "10.0.0.3", 00:07:43.616 "adrfam": "ipv4", 00:07:43.616 "trsvcid": "4420", 00:07:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:43.616 "hdgst": false, 00:07:43.616 "ddgst": false 00:07:43.616 }, 00:07:43.616 "method": "bdev_nvme_attach_controller" 00:07:43.616 }' 00:07:43.616 [2024-11-17 13:17:32.624255] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:43.616 [2024-11-17 13:17:32.624568] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:43.616 [2024-11-17 13:17:32.630634] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:43.616 [2024-11-17 13:17:32.630927] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:43.616 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64129 00:07:43.616 [2024-11-17 13:17:32.635583] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:43.616 [2024-11-17 13:17:32.635842] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:43.616 [2024-11-17 13:17:32.641937] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:43.616 [2024-11-17 13:17:32.642171] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:43.875 [2024-11-17 13:17:32.843801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.875 [2024-11-17 13:17:32.898921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:43.875 [2024-11-17 13:17:32.913083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.875 [2024-11-17 13:17:32.919707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.875 [2024-11-17 13:17:32.977052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:43.875 [2024-11-17 13:17:32.989832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.875 [2024-11-17 13:17:32.994682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.875 [2024-11-17 13:17:33.049694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:43.875 Running I/O for 1 seconds... 00:07:43.875 [2024-11-17 13:17:33.063675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.875 [2024-11-17 13:17:33.077941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.133 Running I/O for 1 seconds... 00:07:44.133 [2024-11-17 13:17:33.147009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:44.133 [2024-11-17 13:17:33.161069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.133 Running I/O for 1 seconds... 00:07:44.133 Running I/O for 1 seconds... 
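The four "Running I/O for 1 seconds..." lines correspond to four parallel bdevperf runs against the same Nvme1n1 namespace, one workload per reactor core. Only the unmap invocation is fully visible in the xtrace (target/bdev_io_wait.sh@33); the reconstruction below fills in the other three from the per-core result tables that follow, so the shm ids (-i) and the write/read/flush lines are inferred rather than quoted from the script.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &   # inferred from the 0x10 result table
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &   # inferred from the 0x20 result table
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &   # inferred from the 0x40 result table
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &   # as traced above
wait   # the script waits on PIDs 64129, 64131, 64133 and 64136 before tearing the target down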
00:07:45.069 168952.00 IOPS, 659.97 MiB/s 00:07:45.069 Latency(us) 00:07:45.069 [2024-11-17T13:17:34.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.069 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:45.069 Nvme1n1 : 1.00 168589.44 658.55 0.00 0.00 755.20 422.63 2115.03 00:07:45.069 [2024-11-17T13:17:34.293Z] =================================================================================================================== 00:07:45.069 [2024-11-17T13:17:34.293Z] Total : 168589.44 658.55 0.00 0.00 755.20 422.63 2115.03 00:07:45.069 8315.00 IOPS, 32.48 MiB/s 00:07:45.069 Latency(us) 00:07:45.069 [2024-11-17T13:17:34.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.069 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:45.069 Nvme1n1 : 1.01 8362.09 32.66 0.00 0.00 15229.34 6464.23 20494.89 00:07:45.069 [2024-11-17T13:17:34.293Z] =================================================================================================================== 00:07:45.069 [2024-11-17T13:17:34.293Z] Total : 8362.09 32.66 0.00 0.00 15229.34 6464.23 20494.89 00:07:45.069 4668.00 IOPS, 18.23 MiB/s 00:07:45.069 Latency(us) 00:07:45.069 [2024-11-17T13:17:34.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.069 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:45.069 Nvme1n1 : 1.02 4732.33 18.49 0.00 0.00 26851.32 13583.83 43611.23 00:07:45.069 [2024-11-17T13:17:34.293Z] =================================================================================================================== 00:07:45.069 [2024-11-17T13:17:34.293Z] Total : 4732.33 18.49 0.00 0.00 26851.32 13583.83 43611.23 00:07:45.328 6297.00 IOPS, 24.60 MiB/s 00:07:45.328 Latency(us) 00:07:45.328 [2024-11-17T13:17:34.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.328 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:45.328 Nvme1n1 : 1.01 6359.57 24.84 0.00 0.00 20012.69 6642.97 26691.03 00:07:45.328 [2024-11-17T13:17:34.552Z] =================================================================================================================== 00:07:45.328 [2024-11-17T13:17:34.552Z] Total : 6359.57 24.84 0.00 0.00 20012.69 6642.97 26691.03 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64131 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64133 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64136 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.328 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:45.328 rmmod nvme_tcp 00:07:45.328 rmmod nvme_fabrics 00:07:45.587 rmmod nvme_keyring 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64100 ']' 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64100 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64100 ']' 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64100 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64100 00:07:45.587 killing process with pid 64100 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64100' 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64100 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64100 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.587 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:45.846 13:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:45.846 00:07:45.846 real 0m3.685s 00:07:45.846 user 0m14.644s 00:07:45.846 sys 0m2.229s 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.846 ************************************ 00:07:45.846 END TEST nvmf_bdev_io_wait 00:07:45.846 ************************************ 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.105 ************************************ 00:07:46.105 START TEST nvmf_queue_depth 00:07:46.105 ************************************ 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:46.105 * Looking for test storage... 
00:07:46.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.105 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.106 --rc genhtml_branch_coverage=1 00:07:46.106 --rc genhtml_function_coverage=1 00:07:46.106 --rc genhtml_legend=1 00:07:46.106 --rc geninfo_all_blocks=1 00:07:46.106 --rc geninfo_unexecuted_blocks=1 00:07:46.106 00:07:46.106 ' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.106 --rc genhtml_branch_coverage=1 00:07:46.106 --rc genhtml_function_coverage=1 00:07:46.106 --rc genhtml_legend=1 00:07:46.106 --rc geninfo_all_blocks=1 00:07:46.106 --rc geninfo_unexecuted_blocks=1 00:07:46.106 00:07:46.106 ' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.106 --rc genhtml_branch_coverage=1 00:07:46.106 --rc genhtml_function_coverage=1 00:07:46.106 --rc genhtml_legend=1 00:07:46.106 --rc geninfo_all_blocks=1 00:07:46.106 --rc geninfo_unexecuted_blocks=1 00:07:46.106 00:07:46.106 ' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.106 --rc genhtml_branch_coverage=1 00:07:46.106 --rc genhtml_function_coverage=1 00:07:46.106 --rc genhtml_legend=1 00:07:46.106 --rc geninfo_all_blocks=1 00:07:46.106 --rc geninfo_unexecuted_blocks=1 00:07:46.106 00:07:46.106 ' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:46.106 
13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:46.106 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.107 13:17:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:46.366 Cannot find device "nvmf_init_br" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:46.366 Cannot find device "nvmf_init_br2" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:46.366 Cannot find device "nvmf_tgt_br" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.366 Cannot find device "nvmf_tgt_br2" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:46.366 Cannot find device "nvmf_init_br" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:46.366 Cannot find device "nvmf_init_br2" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:46.366 Cannot find device "nvmf_tgt_br" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:46.366 Cannot find device "nvmf_tgt_br2" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:46.366 Cannot find device "nvmf_br" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:46.366 Cannot find device "nvmf_init_if" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:46.366 Cannot find device "nvmf_init_if2" 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.366 13:17:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:46.366 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.367 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.626 
13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:46.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:07:46.626 00:07:46.626 --- 10.0.0.3 ping statistics --- 00:07:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.626 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:46.626 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:46.626 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:46.626 00:07:46.626 --- 10.0.0.4 ping statistics --- 00:07:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.626 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:07:46.626 00:07:46.626 --- 10.0.0.1 ping statistics --- 00:07:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.626 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:46.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:07:46.626 00:07:46.626 --- 10.0.0.2 ping statistics --- 00:07:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.626 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64395 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64395 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64395 ']' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.626 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.626 [2024-11-17 13:17:35.777615] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:46.626 [2024-11-17 13:17:35.777720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.884 [2024-11-17 13:17:35.934883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.884 [2024-11-17 13:17:35.986823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.884 [2024-11-17 13:17:35.986895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.884 [2024-11-17 13:17:35.986910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.884 [2024-11-17 13:17:35.986921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.884 [2024-11-17 13:17:35.986931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.884 [2024-11-17 13:17:35.987371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.884 [2024-11-17 13:17:36.043205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 [2024-11-17 13:17:36.783483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.820 Malloc0 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.820 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.821 [2024-11-17 13:17:36.838298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64427 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64427 /var/tmp/bdevperf.sock 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64427 ']' 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.821 13:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:47.821 [2024-11-17 13:17:36.903102] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:47.821 [2024-11-17 13:17:36.903220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64427 ] 00:07:48.080 [2024-11-17 13:17:37.054990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.080 [2024-11-17 13:17:37.111062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.080 [2024-11-17 13:17:37.169273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.080 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.080 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:48.080 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:48.080 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.080 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:48.338 NVMe0n1 00:07:48.338 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.338 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.338 Running I/O for 10 seconds... 00:07:50.215 7940.00 IOPS, 31.02 MiB/s [2024-11-17T13:17:40.817Z] 8510.00 IOPS, 33.24 MiB/s [2024-11-17T13:17:41.753Z] 8892.67 IOPS, 34.74 MiB/s [2024-11-17T13:17:42.689Z] 9231.50 IOPS, 36.06 MiB/s [2024-11-17T13:17:43.625Z] 9397.60 IOPS, 36.71 MiB/s [2024-11-17T13:17:44.561Z] 9472.17 IOPS, 37.00 MiB/s [2024-11-17T13:17:45.555Z] 9537.71 IOPS, 37.26 MiB/s [2024-11-17T13:17:46.491Z] 9585.88 IOPS, 37.44 MiB/s [2024-11-17T13:17:47.867Z] 9582.56 IOPS, 37.43 MiB/s [2024-11-17T13:17:47.867Z] 9611.10 IOPS, 37.54 MiB/s 00:07:58.643 Latency(us) 00:07:58.643 [2024-11-17T13:17:47.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.643 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:58.643 Verification LBA range: start 0x0 length 0x4000 00:07:58.643 NVMe0n1 : 10.08 9634.80 37.64 0.00 0.00 105795.12 23354.65 79596.45 00:07:58.643 [2024-11-17T13:17:47.867Z] =================================================================================================================== 00:07:58.643 [2024-11-17T13:17:47.867Z] Total : 9634.80 37.64 0.00 0.00 105795.12 23354.65 79596.45 00:07:58.643 { 00:07:58.643 "results": [ 00:07:58.643 { 00:07:58.643 "job": "NVMe0n1", 00:07:58.643 "core_mask": "0x1", 00:07:58.643 "workload": "verify", 00:07:58.643 "status": "finished", 00:07:58.643 "verify_range": { 00:07:58.643 "start": 0, 00:07:58.643 "length": 16384 00:07:58.643 }, 00:07:58.643 "queue_depth": 1024, 00:07:58.643 "io_size": 4096, 00:07:58.643 "runtime": 10.08168, 00:07:58.643 "iops": 9634.802929670452, 00:07:58.643 "mibps": 37.6359489440252, 00:07:58.643 "io_failed": 0, 00:07:58.643 "io_timeout": 0, 00:07:58.643 "avg_latency_us": 105795.11714144795, 00:07:58.643 "min_latency_us": 23354.647272727274, 00:07:58.643 "max_latency_us": 79596.45090909091 00:07:58.643 } 
00:07:58.643 ], 00:07:58.643 "core_count": 1 00:07:58.643 } 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64427 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64427 ']' 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64427 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64427 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.643 killing process with pid 64427 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64427' 00:07:58.643 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.643 00:07:58.643 Latency(us) 00:07:58.643 [2024-11-17T13:17:47.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.643 [2024-11-17T13:17:47.867Z] =================================================================================================================== 00:07:58.643 [2024-11-17T13:17:47.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64427 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64427 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.643 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.643 rmmod nvme_tcp 00:07:58.643 rmmod nvme_fabrics 00:07:58.643 rmmod nvme_keyring 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64395 ']' 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64395 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64395 ']' 00:07:58.902 
13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64395 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64395 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.902 killing process with pid 64395 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64395' 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64395 00:07:58.902 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64395 00:07:58.902 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.902 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.902 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.903 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:58.903 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.903 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:58.903 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:59.162 13:17:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:59.162 00:07:59.162 real 0m13.231s 00:07:59.162 user 0m22.062s 00:07:59.162 sys 0m2.312s 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.162 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:59.162 ************************************ 00:07:59.162 END TEST nvmf_queue_depth 00:07:59.162 ************************************ 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.422 ************************************ 00:07:59.422 START TEST nvmf_target_multipath 00:07:59.422 ************************************ 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:59.422 * Looking for test storage... 
00:07:59.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.422 --rc genhtml_branch_coverage=1 00:07:59.422 --rc genhtml_function_coverage=1 00:07:59.422 --rc genhtml_legend=1 00:07:59.422 --rc geninfo_all_blocks=1 00:07:59.422 --rc geninfo_unexecuted_blocks=1 00:07:59.422 00:07:59.422 ' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.422 --rc genhtml_branch_coverage=1 00:07:59.422 --rc genhtml_function_coverage=1 00:07:59.422 --rc genhtml_legend=1 00:07:59.422 --rc geninfo_all_blocks=1 00:07:59.422 --rc geninfo_unexecuted_blocks=1 00:07:59.422 00:07:59.422 ' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.422 --rc genhtml_branch_coverage=1 00:07:59.422 --rc genhtml_function_coverage=1 00:07:59.422 --rc genhtml_legend=1 00:07:59.422 --rc geninfo_all_blocks=1 00:07:59.422 --rc geninfo_unexecuted_blocks=1 00:07:59.422 00:07:59.422 ' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.422 --rc genhtml_branch_coverage=1 00:07:59.422 --rc genhtml_function_coverage=1 00:07:59.422 --rc genhtml_legend=1 00:07:59.422 --rc geninfo_all_blocks=1 00:07:59.422 --rc geninfo_unexecuted_blocks=1 00:07:59.422 00:07:59.422 ' 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.422 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.423 
13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.423 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:59.423 13:17:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:59.423 Cannot find device "nvmf_init_br" 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:59.423 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:59.681 Cannot find device "nvmf_init_br2" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:59.681 Cannot find device "nvmf_tgt_br" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.681 Cannot find device "nvmf_tgt_br2" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:59.681 Cannot find device "nvmf_init_br" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:59.681 Cannot find device "nvmf_init_br2" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:59.681 Cannot find device "nvmf_tgt_br" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:59.681 Cannot find device "nvmf_tgt_br2" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:59.681 Cannot find device "nvmf_br" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:59.681 Cannot find device "nvmf_init_if" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:59.681 Cannot find device "nvmf_init_if2" 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.681 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
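What the nvmf_veth_init trace above and below is building is a small veth topology so the kernel NVMe/TCP host in the root namespace and the SPDK target inside nvmf_tgt_ns_spdk can reach each other over 10.0.0.0/24. Condensed to a single initiator/target pair (interface names and addresses follow the trace; this is an illustrative sketch of the helper's effect, not the verbatim code in nvmf/common.sh):

# one veth pair per side; the *_br ends stay in the root namespace and join a bridge
ip netns add nvmf_tgt_ns_spdk                                   # namespace that will run nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # host (initiator) address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # bridge ties the two root-side ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                              # reachability check, as in the trace below

The ping statistics further down confirm both directions (host to 10.0.0.3/10.0.0.4 and the namespace back to 10.0.0.1/10.0.0.2) before any NVMe traffic is attempted.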
00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:59.682 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:59.940 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:59.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:59.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:07:59.941 00:07:59.941 --- 10.0.0.3 ping statistics --- 00:07:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.941 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:59.941 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:59.941 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:07:59.941 00:07:59.941 --- 10.0.0.4 ping statistics --- 00:07:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.941 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:59.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:59.941 00:07:59.941 --- 10.0.0.1 ping statistics --- 00:07:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.941 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:59.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:07:59.941 00:07:59.941 --- 10.0.0.2 ping statistics --- 00:07:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.941 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.941 13:17:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64791 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64791 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64791 ']' 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.941 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:59.941 [2024-11-17 13:17:49.065866] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:59.941 [2024-11-17 13:17:49.065960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.198 [2024-11-17 13:17:49.216967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.198 [2024-11-17 13:17:49.271507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.199 [2024-11-17 13:17:49.271579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.199 [2024-11-17 13:17:49.271594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.199 [2024-11-17 13:17:49.271604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.199 [2024-11-17 13:17:49.271623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.199 [2024-11-17 13:17:49.272889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.199 [2024-11-17 13:17:49.273436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.199 [2024-11-17 13:17:49.273492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.199 [2024-11-17 13:17:49.273496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.199 [2024-11-17 13:17:49.331510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.199 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.199 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:00.199 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.199 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.199 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.457 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.457 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:00.716 [2024-11-17 13:17:49.737826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.716 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:00.974 Malloc0 00:08:00.974 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:01.233 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.491 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:01.750 [2024-11-17 13:17:50.822721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:01.750 13:17:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:02.010 [2024-11-17 13:17:51.046914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:02.010 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:02.010 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:02.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:02.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:02.270 13:17:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64873 00:08:04.175 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:04.176 13:17:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:04.176 [global] 00:08:04.176 thread=1 00:08:04.176 invalidate=1 00:08:04.176 rw=randrw 00:08:04.176 time_based=1 00:08:04.176 runtime=6 00:08:04.176 ioengine=libaio 00:08:04.176 direct=1 00:08:04.176 bs=4096 00:08:04.176 iodepth=128 00:08:04.176 norandommap=0 00:08:04.176 numjobs=1 00:08:04.176 00:08:04.176 verify_dump=1 00:08:04.176 verify_backlog=512 00:08:04.176 verify_state_save=0 00:08:04.176 do_verify=1 00:08:04.176 verify=crc32c-intel 00:08:04.176 [job0] 00:08:04.176 filename=/dev/nvme0n1 00:08:04.434 Could not set queue depth (nvme0n1) 00:08:04.434 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:04.434 fio-3.35 00:08:04.434 Starting 1 thread 00:08:05.371 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:05.630 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:05.889 13:17:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:06.148 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:06.408 13:17:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64873 00:08:10.598 00:08:10.598 job0: (groupid=0, jobs=1): err= 0: pid=64899: Sun Nov 17 13:17:59 2024 00:08:10.598 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(248MiB/6007msec) 00:08:10.598 slat (usec): min=2, max=5858, avg=54.42, stdev=218.14 00:08:10.598 clat (usec): min=1053, max=15027, avg=8150.51, stdev=1444.40 00:08:10.598 lat (usec): min=1062, max=15067, avg=8204.93, stdev=1449.21 00:08:10.598 clat percentiles (usec): 00:08:10.598 | 1.00th=[ 4113], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7373], 00:08:10.598 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8225], 00:08:10.598 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11469], 00:08:10.598 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13304], 99.95th=[13698], 00:08:10.598 | 99.99th=[14484] 00:08:10.598 bw ( KiB/s): min= 7688, max=28096, per=53.92%, avg=22808.67, stdev=5246.93, samples=12 00:08:10.598 iops : min= 1922, max= 7024, avg=5702.17, stdev=1311.73, samples=12 00:08:10.598 write: IOPS=6118, BW=23.9MiB/s (25.1MB/s)(134MiB/5591msec); 0 zone resets 00:08:10.598 slat (usec): min=4, max=1939, avg=64.76, stdev=157.70 00:08:10.598 clat (usec): min=594, max=15272, avg=7141.99, stdev=1289.79 00:08:10.598 lat (usec): min=629, max=15295, avg=7206.74, stdev=1293.88 00:08:10.598 clat percentiles (usec): 00:08:10.598 | 1.00th=[ 3130], 5.00th=[ 4146], 10.00th=[ 5407], 20.00th=[ 6652], 00:08:10.598 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7504], 00:08:10.598 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:08:10.598 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12518], 99.95th=[13042], 00:08:10.598 | 99.99th=[13829] 00:08:10.598 bw ( KiB/s): min= 8152, max=27416, per=93.03%, avg=22766.00, stdev=5048.42, samples=12 00:08:10.598 iops : min= 2038, max= 6854, avg=5691.50, stdev=1262.10, samples=12 00:08:10.598 lat (usec) : 750=0.01%, 1000=0.01% 00:08:10.598 lat (msec) : 2=0.02%, 4=2.04%, 10=91.87%, 20=6.06% 00:08:10.598 cpu : usr=5.29%, sys=22.09%, ctx=5575, majf=0, minf=90 00:08:10.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:10.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:10.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:10.598 issued rwts: total=63526,34207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:10.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:10.598 00:08:10.598 Run status group 0 (all jobs): 00:08:10.598 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=248MiB (260MB), run=6007-6007msec 00:08:10.598 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=134MiB (140MB), run=5591-5591msec 00:08:10.598 00:08:10.598 Disk stats (read/write): 00:08:10.598 nvme0n1: ios=62637/33600, merge=0/0, ticks=487111/224286, in_queue=711397, util=98.58% 00:08:10.598 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:10.857 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64980 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:11.116 13:18:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:11.116 [global] 00:08:11.116 thread=1 00:08:11.116 invalidate=1 00:08:11.116 rw=randrw 00:08:11.116 time_based=1 00:08:11.116 runtime=6 00:08:11.116 ioengine=libaio 00:08:11.116 direct=1 00:08:11.116 bs=4096 00:08:11.116 iodepth=128 00:08:11.116 norandommap=0 00:08:11.116 numjobs=1 00:08:11.116 00:08:11.116 verify_dump=1 00:08:11.116 verify_backlog=512 00:08:11.116 verify_state_save=0 00:08:11.116 do_verify=1 00:08:11.116 verify=crc32c-intel 00:08:11.116 [job0] 00:08:11.116 filename=/dev/nvme0n1 00:08:11.375 Could not set queue depth (nvme0n1) 00:08:11.375 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:11.375 fio-3.35 00:08:11.375 Starting 1 thread 00:08:12.335 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:12.594 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:12.852 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:13.111 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:13.370 13:18:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64980 00:08:17.557 00:08:17.557 job0: (groupid=0, jobs=1): err= 0: pid=65001: Sun Nov 17 13:18:06 2024 00:08:17.557 read: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(281MiB/6003msec) 00:08:17.557 slat (usec): min=3, max=5447, avg=41.20, stdev=174.77 00:08:17.557 clat (usec): min=279, max=14691, avg=7291.94, stdev=1827.80 00:08:17.557 lat (usec): min=334, max=14704, avg=7333.15, stdev=1841.01 00:08:17.557 clat percentiles (usec): 00:08:17.557 | 1.00th=[ 2737], 5.00th=[ 3654], 10.00th=[ 4621], 20.00th=[ 5932], 00:08:17.557 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:08:17.557 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[ 8848], 95.00th=[10159], 00:08:17.557 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13304], 99.95th=[13435], 00:08:17.557 | 99.99th=[13960] 00:08:17.557 bw ( KiB/s): min=11728, max=39528, per=53.23%, avg=25530.18, stdev=8876.97, samples=11 00:08:17.557 iops : min= 2932, max= 9882, avg=6382.55, stdev=2219.24, samples=11 00:08:17.557 write: IOPS=7281, BW=28.4MiB/s (29.8MB/s)(149MiB/5240msec); 0 zone resets 00:08:17.557 slat (usec): min=12, max=1535, avg=51.72, stdev=129.93 00:08:17.557 clat (usec): min=1179, max=13364, avg=6221.02, stdev=1698.76 00:08:17.557 lat (usec): min=1203, max=13386, avg=6272.75, stdev=1712.56 00:08:17.557 clat percentiles (usec): 00:08:17.557 | 1.00th=[ 2474], 5.00th=[ 3195], 10.00th=[ 3687], 20.00th=[ 4359], 00:08:17.557 | 30.00th=[ 5211], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7046], 00:08:17.557 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:08:17.557 | 99.00th=[ 9765], 99.50th=[10683], 99.90th=[11731], 99.95th=[12256], 00:08:17.557 | 99.99th=[13173] 00:08:17.557 bw ( KiB/s): min=12360, max=40087, per=87.70%, avg=25543.91, stdev=8590.69, samples=11 00:08:17.557 iops : min= 3090, max=10021, avg=6385.91, stdev=2147.55, samples=11 00:08:17.557 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:08:17.557 lat (msec) : 2=0.24%, 4=9.02%, 10=87.00%, 20=3.68% 00:08:17.557 cpu : usr=6.11%, sys=22.13%, ctx=6205, majf=0, minf=127 00:08:17.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:08:17.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:17.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:17.557 issued rwts: total=71985,38155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:17.557 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:08:17.557 00:08:17.557 Run status group 0 (all jobs): 00:08:17.557 READ: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=281MiB (295MB), run=6003-6003msec 00:08:17.557 WRITE: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=149MiB (156MB), run=5240-5240msec 00:08:17.557 00:08:17.557 Disk stats (read/write): 00:08:17.557 nvme0n1: ios=71234/37390, merge=0/0, ticks=494917/216761, in_queue=711678, util=98.65% 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:17.558 13:18:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.816 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:17.816 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:17.816 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:17.816 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:17.816 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.816 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:18.075 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.075 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:18.075 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.075 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.075 rmmod nvme_tcp 00:08:18.075 rmmod nvme_fabrics 00:08:18.075 rmmod nvme_keyring 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64791 ']' 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64791 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64791 ']' 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64791 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64791 00:08:18.076 killing process with pid 64791 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64791' 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64791 00:08:18.076 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64791 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:18.335 13:18:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.335 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:18.594 00:08:18.594 real 0m19.209s 00:08:18.594 user 1m10.702s 00:08:18.594 sys 0m10.224s 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.594 ************************************ 00:08:18.594 END TEST nvmf_target_multipath 00:08:18.594 ************************************ 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.594 ************************************ 00:08:18.594 START TEST nvmf_zcopy 00:08:18.594 ************************************ 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.594 * Looking for test storage... 
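The check_ana_state calls traced throughout the multipath test above poll a per-path sysfs attribute until it reports the expected ANA state. A minimal sketch of such a helper, assuming a simple sleep-and-retry loop; the variable names (path, ana_state, timeout=20, ana_state_f) come straight from the trace, but the loop body is an assumption and the authoritative version lives in test/nvmf/target/multipath.sh:

    check_ana_state() {
        local path=$1 ana_state=$2
        # Values visible in the xtrace: a 20-second budget and the per-path
        # ana_state attribute exposed by the kernel NVMe multipath code.
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Assumed polling loop: wait until the attribute exists and matches.
        while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
            if ((timeout-- == 0)); then
                echo "timeout waiting for $path to report $ana_state" >&2
                return 1
            fi
            sleep 1
        done
    }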
00:08:18.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.594 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.854 --rc genhtml_branch_coverage=1 00:08:18.854 --rc genhtml_function_coverage=1 00:08:18.854 --rc genhtml_legend=1 00:08:18.854 --rc geninfo_all_blocks=1 00:08:18.854 --rc geninfo_unexecuted_blocks=1 00:08:18.854 00:08:18.854 ' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.854 --rc genhtml_branch_coverage=1 00:08:18.854 --rc genhtml_function_coverage=1 00:08:18.854 --rc genhtml_legend=1 00:08:18.854 --rc geninfo_all_blocks=1 00:08:18.854 --rc geninfo_unexecuted_blocks=1 00:08:18.854 00:08:18.854 ' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.854 --rc genhtml_branch_coverage=1 00:08:18.854 --rc genhtml_function_coverage=1 00:08:18.854 --rc genhtml_legend=1 00:08:18.854 --rc geninfo_all_blocks=1 00:08:18.854 --rc geninfo_unexecuted_blocks=1 00:08:18.854 00:08:18.854 ' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.854 --rc genhtml_branch_coverage=1 00:08:18.854 --rc genhtml_function_coverage=1 00:08:18.854 --rc genhtml_legend=1 00:08:18.854 --rc geninfo_all_blocks=1 00:08:18.854 --rc geninfo_unexecuted_blocks=1 00:08:18.854 00:08:18.854 ' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.854 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.854 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:18.855 Cannot find device "nvmf_init_br" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:18.855 13:18:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:18.855 Cannot find device "nvmf_init_br2" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:18.855 Cannot find device "nvmf_tgt_br" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.855 Cannot find device "nvmf_tgt_br2" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:18.855 Cannot find device "nvmf_init_br" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:18.855 Cannot find device "nvmf_init_br2" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:18.855 Cannot find device "nvmf_tgt_br" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:18.855 Cannot find device "nvmf_tgt_br2" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:18.855 Cannot find device "nvmf_br" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:18.855 Cannot find device "nvmf_init_if" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:18.855 Cannot find device "nvmf_init_if2" 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:18.855 13:18:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:18.855 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.114 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:19.115 13:18:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:19.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:19.115 00:08:19.115 --- 10.0.0.3 ping statistics --- 00:08:19.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.115 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:19.115 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:19.115 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:08:19.115 00:08:19.115 --- 10.0.0.4 ping statistics --- 00:08:19.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.115 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:19.115 00:08:19.115 --- 10.0.0.1 ping statistics --- 00:08:19.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.115 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:19.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:19.115 00:08:19.115 --- 10.0.0.2 ping statistics --- 00:08:19.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.115 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65306 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65306 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65306 ']' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.115 13:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:19.374 [2024-11-17 13:18:08.347333] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
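Condensed, the nvmf_veth_init sequence traced above builds a two-path veth/bridge topology: two initiator interfaces (10.0.0.1, 10.0.0.2) in the root namespace and two target interfaces (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. The sketch below paraphrases the commands visible in the log; it is not the verbatim contents of test/nvmf/common.sh, and the link-up steps and error handling are omitted.

    # Paraphrase of the nvmf_veth_init trace above (names and addresses from the log).
    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: two initiator-side, two target-side.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bridge the peer ends together, then open TCP/4420 and bridge forwarding.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT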
00:08:19.374 [2024-11-17 13:18:08.347441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.374 [2024-11-17 13:18:08.496253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.374 [2024-11-17 13:18:08.539287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.374 [2024-11-17 13:18:08.539328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.374 [2024-11-17 13:18:08.539338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.374 [2024-11-17 13:18:08.539345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.374 [2024-11-17 13:18:08.539351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.374 [2024-11-17 13:18:08.539661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.374 [2024-11-17 13:18:08.590010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.309 [2024-11-17 13:18:09.329834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.309 [2024-11-17 13:18:09.345957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.309 malloc0 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:20.309 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.310 { 00:08:20.310 "params": { 00:08:20.310 "name": "Nvme$subsystem", 00:08:20.310 "trtype": "$TEST_TRANSPORT", 00:08:20.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.310 "adrfam": "ipv4", 00:08:20.310 "trsvcid": "$NVMF_PORT", 00:08:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.310 "hdgst": ${hdgst:-false}, 00:08:20.310 "ddgst": ${ddgst:-false} 00:08:20.310 }, 00:08:20.310 "method": "bdev_nvme_attach_controller" 00:08:20.310 } 00:08:20.310 EOF 00:08:20.310 )") 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
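Stripped of the rpc_cmd wrapper and the xtrace noise, the target-side setup for the zcopy run above amounts to the following scripts/rpc.py sequence. The arguments are copied from the trace; the comments are interpretive, and the exact meaning of the transport flags is best checked against rpc.py --help rather than taken from this sketch.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with the options used by the test, including --zcopy.
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem cnode1: allow any host (-a), fixed serial, up to 10 namespaces (-m).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # Data and discovery listeners on the first target IP.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1.
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1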
00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:20.310 13:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.310 "params": { 00:08:20.310 "name": "Nvme1", 00:08:20.310 "trtype": "tcp", 00:08:20.310 "traddr": "10.0.0.3", 00:08:20.310 "adrfam": "ipv4", 00:08:20.310 "trsvcid": "4420", 00:08:20.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:20.310 "hdgst": false, 00:08:20.310 "ddgst": false 00:08:20.310 }, 00:08:20.310 "method": "bdev_nvme_attach_controller" 00:08:20.310 }' 00:08:20.310 [2024-11-17 13:18:09.447695] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:20.310 [2024-11-17 13:18:09.447845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65339 ] 00:08:20.569 [2024-11-17 13:18:09.602563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.569 [2024-11-17 13:18:09.654975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.569 [2024-11-17 13:18:09.721701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.828 Running I/O for 10 seconds... 00:08:22.703 6901.00 IOPS, 53.91 MiB/s [2024-11-17T13:18:12.864Z] 7034.00 IOPS, 54.95 MiB/s [2024-11-17T13:18:14.243Z] 7005.67 IOPS, 54.73 MiB/s [2024-11-17T13:18:15.180Z] 7056.50 IOPS, 55.13 MiB/s [2024-11-17T13:18:16.116Z] 7051.20 IOPS, 55.09 MiB/s [2024-11-17T13:18:17.053Z] 7096.33 IOPS, 55.44 MiB/s [2024-11-17T13:18:18.006Z] 7105.00 IOPS, 55.51 MiB/s [2024-11-17T13:18:18.942Z] 7111.75 IOPS, 55.56 MiB/s [2024-11-17T13:18:19.880Z] 7115.11 IOPS, 55.59 MiB/s [2024-11-17T13:18:19.880Z] 7118.90 IOPS, 55.62 MiB/s 00:08:30.656 Latency(us) 00:08:30.656 [2024-11-17T13:18:19.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:30.656 Verification LBA range: start 0x0 length 0x1000 00:08:30.656 Nvme1n1 : 10.01 7122.10 55.64 0.00 0.00 17918.57 2576.76 26810.18 00:08:30.656 [2024-11-17T13:18:19.880Z] =================================================================================================================== 00:08:30.656 [2024-11-17T13:18:19.880Z] Total : 7122.10 55.64 0.00 0.00 17918.57 2576.76 26810.18 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65456 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.915 { 00:08:30.915 "params": { 00:08:30.915 "name": "Nvme$subsystem", 00:08:30.915 "trtype": "$TEST_TRANSPORT", 00:08:30.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.915 "adrfam": "ipv4", 00:08:30.915 "trsvcid": "$NVMF_PORT", 00:08:30.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.915 "hdgst": ${hdgst:-false}, 00:08:30.915 "ddgst": ${ddgst:-false} 00:08:30.915 }, 00:08:30.915 "method": "bdev_nvme_attach_controller" 00:08:30.915 } 00:08:30.915 EOF 00:08:30.915 )") 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:30.915 [2024-11-17 13:18:20.051851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:30.915 [2024-11-17 13:18:20.051909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:30.915 13:18:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.915 "params": { 00:08:30.915 "name": "Nvme1", 00:08:30.915 "trtype": "tcp", 00:08:30.915 "traddr": "10.0.0.3", 00:08:30.915 "adrfam": "ipv4", 00:08:30.915 "trsvcid": "4420", 00:08:30.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.915 "hdgst": false, 00:08:30.915 "ddgst": false 00:08:30.915 }, 00:08:30.915 "method": "bdev_nvme_attach_controller" 00:08:30.915 }' 00:08:30.915 [2024-11-17 13:18:20.063741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 [2024-11-17 13:18:20.063815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.915 [2024-11-17 13:18:20.075739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 [2024-11-17 13:18:20.075812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.915 [2024-11-17 13:18:20.087743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 [2024-11-17 13:18:20.087820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.915 [2024-11-17 13:18:20.099744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 [2024-11-17 13:18:20.099817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.915 [2024-11-17 13:18:20.101035] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
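Both bdevperf invocations in this test pass their NVMe-oF attach configuration as --json /dev/fd/62 or /dev/fd/63, which typically indicates bash process substitution: gen_nvmf_target_json (traced above) prints the bdev_nvme_attach_controller JSON and bdevperf reads it from the substituted descriptor. A plausible shape of the second, 5-second random read/write invocation, assuming that is indeed how the descriptor is produced:

    # Hypothetical reconstruction: flags are copied from the trace, the <(...) form
    # and the reliance on gen_nvmf_target_json from test/nvmf/common.sh are assumptions.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192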
00:08:30.915 [2024-11-17 13:18:20.101128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65456 ] 00:08:30.915 [2024-11-17 13:18:20.111741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 [2024-11-17 13:18:20.111830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.915 [2024-11-17 13:18:20.123744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.915 [2024-11-17 13:18:20.123831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.135749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.135804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.147748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.147819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.159751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.159808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.171805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.171844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.183759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.183874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.195761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.195864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.207832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.207857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.219803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.219831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.231807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.231837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.243805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.243845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.249676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.174 [2024-11-17 13:18:20.255824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.255851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.267871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.267912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.279837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.279862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.291855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.291905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.303192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.174 [2024-11-17 13:18:20.303870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.303895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.315845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.315886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.327870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.327914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.339859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.339903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.351860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.351907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.363858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.363904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.367363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.174 [2024-11-17 13:18:20.375875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.375918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.174 [2024-11-17 13:18:20.387865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.174 [2024-11-17 13:18:20.387911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.399845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.399893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.411875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.411915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.423918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:31.431 [2024-11-17 13:18:20.423964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.435947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.435992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.447952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.447997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.459959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.460003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.471963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.472006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.483982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.484028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 Running I/O for 5 seconds... 00:08:31.431 [2024-11-17 13:18:20.499651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.499698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.515685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.515731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.531564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.531610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.549607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.549652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.565307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.565352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.583234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.583279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.598346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.598392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.615129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.615173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.431 [2024-11-17 13:18:20.630100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.630146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:31.431 [2024-11-17 13:18:20.646439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.431 [2024-11-17 13:18:20.646484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.664022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.664069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.679596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.679640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.697138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.697198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.713857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.713902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.730537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.730582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.746603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.746649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.763731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.763810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.781242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.781286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.798519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.798549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.814104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.814149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.831527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.831572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.847966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.847997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.865626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.865679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.881998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 
[2024-11-17 13:18:20.882042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.689 [2024-11-17 13:18:20.899518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.689 [2024-11-17 13:18:20.899565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:20.916750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:20.916806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:20.932541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:20.932587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:20.944086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:20.944148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:20.960192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:20.960237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:20.977039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:20.977083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:20.993835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:20.993879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.010834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.010878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.027876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.027921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.044940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.044984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.061727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.061796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.079284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.079329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.095542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.095587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.112907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.112951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.129839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.129885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.146059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.146104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.948 [2024-11-17 13:18:21.162660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.948 [2024-11-17 13:18:21.162705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.180059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.180106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.196550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.196595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.214023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.214069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.230003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.230050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.244415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.244459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.260813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.260872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.275379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.275423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.290802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.290843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.300257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.300302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.315872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.315904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.333132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.333179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.348605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.348637] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.366292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.366338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.382811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.382867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.399248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.399293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.207 [2024-11-17 13:18:21.416148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.207 [2024-11-17 13:18:21.416193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.432776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.432851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.450657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.450702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.466164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.466209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.483186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.483232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 13489.00 IOPS, 105.38 MiB/s [2024-11-17T13:18:21.690Z] [2024-11-17 13:18:21.499741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.499832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.516546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.516591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.533466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.533510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.550432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.550477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.567283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.567328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.583714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.583760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 
13:18:21.600150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.600195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.617612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.617673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.466 [2024-11-17 13:18:21.633030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.466 [2024-11-17 13:18:21.633076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.467 [2024-11-17 13:18:21.643682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.467 [2024-11-17 13:18:21.643726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.467 [2024-11-17 13:18:21.659827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.467 [2024-11-17 13:18:21.659872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.467 [2024-11-17 13:18:21.676754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.467 [2024-11-17 13:18:21.676829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.694062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.694108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.709327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.709372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.726888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.726932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.743530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.743574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.760880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.760926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.776859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.776903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.787697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.787743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.803570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.803616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.821094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.821139] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.837566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.837595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.855027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.855073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.870495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.870540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.887296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.887341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.903299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.903345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.918476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.918520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.929922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.929967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.725 [2024-11-17 13:18:21.944975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.725 [2024-11-17 13:18:21.945019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:21.962174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:21.962218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:21.979210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:21.979254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:21.996061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:21.996109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.013810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.013854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.029714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.029758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.046955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.046999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.063829] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.063876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.080933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.080977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.097432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.097477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.113817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.113862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.130676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.130721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.147648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.147693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.165168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.165215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.181691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.181736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.984 [2024-11-17 13:18:22.197834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.984 [2024-11-17 13:18:22.197878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.215108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.215152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.230842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.230885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.241457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.241501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.257116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.257175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.273591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.273637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.290477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.290521] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.308527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.308573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.323583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.323626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.341624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.341670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.356500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.356545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.374662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.374708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.390265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.390310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.406615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.406659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.422613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.422658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.440863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.440908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.243 [2024-11-17 13:18:22.454520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.243 [2024-11-17 13:18:22.454567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.470170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.470213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.486890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.486935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 13551.50 IOPS, 105.87 MiB/s [2024-11-17T13:18:22.726Z] [2024-11-17 13:18:22.504623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.504667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.519655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.519704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 
13:18:22.530632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.530677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.546760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.546816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.562980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.563024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.579485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.579529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.596451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.596498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.613263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.613309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.630159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.630218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.646339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.646384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.663742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.663822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.680337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.680382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.697063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.697109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.502 [2024-11-17 13:18:22.714075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.502 [2024-11-17 13:18:22.714120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.730108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.730154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.748017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.748064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.765074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.765120] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.781271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.781316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.798918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.798962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.814029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.814075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.828678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.828723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.839929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.839963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.855258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.855303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.872539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.872584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.889529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.889573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.906398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.906444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.923818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.923863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.939001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.939045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.950356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.950400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.762 [2024-11-17 13:18:22.966218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.762 [2024-11-17 13:18:22.966262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:22.983269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:22.983313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.003544] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.003591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.024264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.024309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.041142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.041187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.058441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.058486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.073868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.073913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.090925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.090968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.106236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.106281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.122572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.122618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.138592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.138637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.156336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.156381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.171598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.171642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.182459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.182504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.198306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.198351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.215464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.215510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.022 [2024-11-17 13:18:23.230691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.022 [2024-11-17 13:18:23.230736] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.247456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.247485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.264046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.264092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.281083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.281129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.297481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.297526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.314701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.314746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.331750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.331812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.348317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.348361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.364373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.364415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.375546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.375591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.391842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.391888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.408712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.408757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.425676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.425722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.441441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.441486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.450977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.451021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.466546] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.466590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 [2024-11-17 13:18:23.478647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.478692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.282 13565.00 IOPS, 105.98 MiB/s [2024-11-17T13:18:23.506Z] [2024-11-17 13:18:23.495195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.282 [2024-11-17 13:18:23.495240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.541 [2024-11-17 13:18:23.510513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.541 [2024-11-17 13:18:23.510559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.541 [2024-11-17 13:18:23.521943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.541 [2024-11-17 13:18:23.521988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.541 [2024-11-17 13:18:23.537826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.541 [2024-11-17 13:18:23.537871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.541 [2024-11-17 13:18:23.554368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.541 [2024-11-17 13:18:23.554413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.541 [2024-11-17 13:18:23.570813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.570857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.586558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.586602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.597250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.597294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.612509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.612554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.629975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.630021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.646862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.646907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.664199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.664244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.679612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:34.542 [2024-11-17 13:18:23.679657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.697390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.697434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.712956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.713001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.730070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.730116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.542 [2024-11-17 13:18:23.746430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.542 [2024-11-17 13:18:23.746475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.764296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.764350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.779510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.779556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.796373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.796403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.813582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.813627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.830019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.830064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.847476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.847521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.864498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.864543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.881538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.881583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.898227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.898272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.915708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.915753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.932507] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.932553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.949384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.949429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.966351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.966395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:23.983423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:23.983468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:24.000143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:24.000190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.801 [2024-11-17 13:18:24.016256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.801 [2024-11-17 13:18:24.016300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.033376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.033422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.050122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.050182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.067210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.067255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.083501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.083546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.099918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.099964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.117030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.117076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.134051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.134097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.150912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.150956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.167554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.167599] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.184620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.184665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.201571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.201616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.218111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.218170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.235002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.235048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.252162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.252221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.060 [2024-11-17 13:18:24.268693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.060 [2024-11-17 13:18:24.268738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.284381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.284424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.299516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.299560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.316667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.316711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.333704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.333749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.350720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.350766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.367196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.367240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.383855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.383904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.401573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.401618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.417022] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.417068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.434747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.434820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.449072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.449118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.465170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.465215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.481328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.481373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 13589.50 IOPS, 106.17 MiB/s [2024-11-17T13:18:24.543Z] [2024-11-17 13:18:24.498880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.498926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.514207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.514253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.319 [2024-11-17 13:18:24.530808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.319 [2024-11-17 13:18:24.530864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.546808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.546864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.563973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.564020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.580142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.580174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.597579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.597624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.612708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.612753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.624120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.624181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.640088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:35.578 [2024-11-17 13:18:24.640135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.657276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.657322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.673598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.673643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.691192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.691237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.706342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.706387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.720834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.720879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.737198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.737242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.752567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.752612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.770639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.770684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.785983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.786029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.578 [2024-11-17 13:18:24.797550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.578 [2024-11-17 13:18:24.797595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.813163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.813208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.830097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.830143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.847400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.847444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.863757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.863854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.881257] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.881302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.898256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.898300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.915438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.915483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.931962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.932008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.948328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.948372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.965933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.965978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.982098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.982143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:24.999847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:24.999893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:25.015110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:25.015170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:25.032825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:25.032871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.837 [2024-11-17 13:18:25.050176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.837 [2024-11-17 13:18:25.050222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.067376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.067421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.084356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.084401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.100031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.100063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.111022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.111068] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.125978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.126023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.143598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.143644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.159102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.159148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.176576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.176621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.192949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.192994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.209995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.210041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.227002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.227047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.243229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.243274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.260350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.260394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.277058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.277104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.293254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.293299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.096 [2024-11-17 13:18:25.310148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.096 [2024-11-17 13:18:25.310192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.355 [2024-11-17 13:18:25.326859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.355 [2024-11-17 13:18:25.326904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.355 [2024-11-17 13:18:25.343286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.355 [2024-11-17 13:18:25.343333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.359383] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.359428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.377366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.377413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.392212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.392273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.408361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.408406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.425197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.425242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.441312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.441358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.458617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.458662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.474031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.474077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 13596.00 IOPS, 106.22 MiB/s [2024-11-17T13:18:25.580Z] [2024-11-17 13:18:25.491364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.491407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 00:08:36.356 Latency(us) 00:08:36.356 [2024-11-17T13:18:25.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.356 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:36.356 Nvme1n1 : 5.01 13596.82 106.23 0.00 0.00 9403.17 3902.37 18945.86 00:08:36.356 [2024-11-17T13:18:25.580Z] =================================================================================================================== 00:08:36.356 [2024-11-17T13:18:25.580Z] Total : 13596.82 106.23 0.00 0.00 9403.17 3902.37 18945.86 00:08:36.356 [2024-11-17 13:18:25.502550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.502592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.514542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.514585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.526563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.526614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 
13:18:25.538569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.538619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.550571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.550619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.562574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.562624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.356 [2024-11-17 13:18:25.574585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.356 [2024-11-17 13:18:25.574632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.586591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.586640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.598591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.598641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.610594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.610645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.622594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.622643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.634585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.634632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.646579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.646622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.658594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.658640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.670588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.670623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 [2024-11-17 13:18:25.682576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:36.615 [2024-11-17 13:18:25.682614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:36.615 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65456) - No such process 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65456 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.615 delay0 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.615 13:18:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:36.874 [2024-11-17 13:18:25.888474] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:44.995 Initializing NVMe Controllers 00:08:44.995 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:44.995 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:44.995 Initialization complete. Launching workers. 
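For reference, the namespace swap and abort run traced in the xtrace lines above reduce to the following RPC sequence. This is a minimal sketch driven through scripts/rpc.py against the already-running target; the subsystem NQN, bdev names, delay arguments and the abort invocation are taken verbatim from the trace, while the rpc.py path and its default /var/tmp/spdk.sock socket are assumptions not shown in the log.

  # Detach the malloc-backed namespace and re-add it behind a delay bdev,
  # so the abort example has long-lived queued I/O it can try to cancel.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Drive the delayed namespace over TCP and submit aborts for the outstanding commands.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'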
00:08:44.995 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 15105 00:08:44.995 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15340, failed to submit 60 00:08:44.995 success 15197, unsuccessful 143, failed 0 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.995 13:18:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.995 rmmod nvme_tcp 00:08:44.995 rmmod nvme_fabrics 00:08:44.995 rmmod nvme_keyring 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65306 ']' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65306 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65306 ']' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65306 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65306 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.995 killing process with pid 65306 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65306' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65306 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65306 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:44.995 13:18:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:44.995 00:08:44.995 real 0m25.824s 00:08:44.995 user 0m41.398s 00:08:44.995 sys 0m7.481s 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.995 ************************************ 00:08:44.995 END TEST nvmf_zcopy 00:08:44.995 ************************************ 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.995 ************************************ 00:08:44.995 START TEST nvmf_nmic 00:08:44.995 ************************************ 00:08:44.995 13:18:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:44.995 * Looking for test storage... 00:08:44.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:44.995 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.996 --rc genhtml_branch_coverage=1 00:08:44.996 --rc genhtml_function_coverage=1 00:08:44.996 --rc genhtml_legend=1 00:08:44.996 --rc geninfo_all_blocks=1 00:08:44.996 --rc geninfo_unexecuted_blocks=1 00:08:44.996 00:08:44.996 ' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.996 --rc genhtml_branch_coverage=1 00:08:44.996 --rc genhtml_function_coverage=1 00:08:44.996 --rc genhtml_legend=1 00:08:44.996 --rc geninfo_all_blocks=1 00:08:44.996 --rc geninfo_unexecuted_blocks=1 00:08:44.996 00:08:44.996 ' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.996 --rc genhtml_branch_coverage=1 00:08:44.996 --rc genhtml_function_coverage=1 00:08:44.996 --rc genhtml_legend=1 00:08:44.996 --rc geninfo_all_blocks=1 00:08:44.996 --rc geninfo_unexecuted_blocks=1 00:08:44.996 00:08:44.996 ' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.996 --rc genhtml_branch_coverage=1 00:08:44.996 --rc genhtml_function_coverage=1 00:08:44.996 --rc genhtml_legend=1 00:08:44.996 --rc geninfo_all_blocks=1 00:08:44.996 --rc geninfo_unexecuted_blocks=1 00:08:44.996 00:08:44.996 ' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.996 13:18:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:44.996 13:18:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.996 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:44.997 Cannot 
find device "nvmf_init_br" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:44.997 Cannot find device "nvmf_init_br2" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:44.997 Cannot find device "nvmf_tgt_br" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.997 Cannot find device "nvmf_tgt_br2" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:44.997 Cannot find device "nvmf_init_br" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:44.997 Cannot find device "nvmf_init_br2" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:44.997 Cannot find device "nvmf_tgt_br" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:44.997 Cannot find device "nvmf_tgt_br2" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:44.997 Cannot find device "nvmf_br" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:44.997 Cannot find device "nvmf_init_if" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:44.997 Cannot find device "nvmf_init_if2" 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:44.997 13:18:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:44.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:44.997 00:08:44.997 --- 10.0.0.3 ping statistics --- 00:08:44.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.997 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:44.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:44.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:08:44.997 00:08:44.997 --- 10.0.0.4 ping statistics --- 00:08:44.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.997 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:44.997 00:08:44.997 --- 10.0.0.1 ping statistics --- 00:08:44.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.997 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:08:44.997 00:08:44.997 --- 10.0.0.2 ping statistics --- 00:08:44.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.997 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65844 00:08:44.997 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65844 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65844 ']' 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.998 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.998 [2024-11-17 13:18:34.207276] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:44.998 [2024-11-17 13:18:34.207369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.257 [2024-11-17 13:18:34.360960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.257 [2024-11-17 13:18:34.416148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.257 [2024-11-17 13:18:34.416217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.257 [2024-11-17 13:18:34.416231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.257 [2024-11-17 13:18:34.416241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.257 [2024-11-17 13:18:34.416250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.257 [2024-11-17 13:18:34.417479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.257 [2024-11-17 13:18:34.417620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.257 [2024-11-17 13:18:34.417731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.257 [2024-11-17 13:18:34.417732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.257 [2024-11-17 13:18:34.476602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.516 [2024-11-17 13:18:34.595365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.516 Malloc0 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:45.516 13:18:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:45.516 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.517 [2024-11-17 13:18:34.667418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.517 test case1: single bdev can't be used in multiple subsystems 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.517 [2024-11-17 13:18:34.691223] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:45.517 [2024-11-17 13:18:34.691278] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:45.517 [2024-11-17 13:18:34.691292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.517 request: 00:08:45.517 { 00:08:45.517 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:45.517 "namespace": { 00:08:45.517 "bdev_name": "Malloc0", 00:08:45.517 "no_auto_visible": false 00:08:45.517 }, 00:08:45.517 "method": "nvmf_subsystem_add_ns", 00:08:45.517 "req_id": 1 00:08:45.517 } 00:08:45.517 Got JSON-RPC error response 00:08:45.517 response: 00:08:45.517 { 00:08:45.517 "code": -32602, 00:08:45.517 "message": "Invalid parameters" 00:08:45.517 } 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:45.517 Adding namespace failed - expected result. 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:45.517 test case2: host connect to nvmf target in multiple paths 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:45.517 [2024-11-17 13:18:34.703375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.517 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:45.776 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:45.776 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.776 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:45.776 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.776 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:45.776 13:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:48.311 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:48.311 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:48.311 13:18:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:48.311 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:48.311 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:48.311 13:18:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:48.311 13:18:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:48.311 [global] 00:08:48.311 thread=1 00:08:48.311 invalidate=1 00:08:48.311 rw=write 00:08:48.311 time_based=1 00:08:48.311 runtime=1 00:08:48.311 ioengine=libaio 00:08:48.311 direct=1 00:08:48.311 bs=4096 00:08:48.311 iodepth=1 00:08:48.311 norandommap=0 00:08:48.311 numjobs=1 00:08:48.311 00:08:48.311 verify_dump=1 00:08:48.311 verify_backlog=512 00:08:48.311 verify_state_save=0 00:08:48.311 do_verify=1 00:08:48.311 verify=crc32c-intel 00:08:48.311 [job0] 00:08:48.311 filename=/dev/nvme0n1 00:08:48.311 Could not set queue depth (nvme0n1) 00:08:48.311 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.311 fio-3.35 00:08:48.311 Starting 1 thread 00:08:49.255 00:08:49.255 job0: (groupid=0, jobs=1): err= 0: pid=65928: Sun Nov 17 13:18:38 2024 00:08:49.255 read: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec) 00:08:49.255 slat (usec): min=10, max=101, avg=13.52, stdev= 4.69 00:08:49.255 clat (usec): min=122, max=631, avg=185.24, stdev=57.93 00:08:49.255 lat (usec): min=133, max=645, avg=198.76, stdev=58.62 00:08:49.255 clat percentiles (usec): 00:08:49.255 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:08:49.255 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 174], 00:08:49.255 | 70.00th=[ 190], 80.00th=[ 217], 90.00th=[ 277], 95.00th=[ 306], 00:08:49.255 | 99.00th=[ 371], 99.50th=[ 465], 99.90th=[ 562], 99.95th=[ 603], 00:08:49.255 | 99.99th=[ 635] 00:08:49.255 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:49.255 slat (usec): min=12, max=108, avg=20.12, stdev= 7.20 00:08:49.255 clat (usec): min=77, max=369, avg=116.22, stdev=36.83 00:08:49.255 lat (usec): min=93, max=389, avg=136.33, stdev=38.07 00:08:49.255 clat percentiles (usec): 00:08:49.255 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 90], 00:08:49.255 | 30.00th=[ 94], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 109], 00:08:49.255 | 70.00th=[ 120], 80.00th=[ 143], 90.00th=[ 176], 95.00th=[ 194], 00:08:49.255 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 285], 99.95th=[ 359], 00:08:49.255 | 99.99th=[ 371] 00:08:49.255 bw ( KiB/s): min=13952, max=13952, per=100.00%, avg=13952.00, stdev= 0.00, samples=1 00:08:49.255 iops : min= 3488, max= 3488, avg=3488.00, stdev= 0.00, samples=1 00:08:49.255 lat (usec) : 100=25.55%, 250=67.24%, 500=7.05%, 750=0.17% 00:08:49.255 cpu : usr=1.70%, sys=8.20%, ctx=5946, majf=0, minf=5 00:08:49.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.255 issued rwts: total=2874,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.255 00:08:49.255 Run status group 0 (all jobs): 00:08:49.255 READ: bw=11.2MiB/s (11.8MB/s), 11.2MiB/s-11.2MiB/s (11.8MB/s-11.8MB/s), io=11.2MiB (11.8MB), run=1001-1001msec 00:08:49.255 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:08:49.255 00:08:49.255 Disk stats (read/write): 00:08:49.255 nvme0n1: ios=2610/2991, merge=0/0, ticks=491/372, in_queue=863, 
util=91.38% 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.255 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.255 rmmod nvme_tcp 00:08:49.529 rmmod nvme_fabrics 00:08:49.529 rmmod nvme_keyring 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65844 ']' 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65844 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65844 ']' 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65844 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65844 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.529 killing process with pid 65844 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65844' 00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65844 
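With the fio pass finished, the test tears everything down in reverse order: disconnect the initiator, unload the kernel NVMe/TCP modules, kill the target process, and (in the lines that follow) strip the SPDK-tagged iptables rules and delete the veth/bridge/namespace topology. Roughly, and assuming the names used during setup:

    # teardown sketch mirroring nvmftestfini / nvmf_veth_fini
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drops both paths (4420 and 4421)
    modprobe -r nvme-tcp nvme-fabrics                  # unload initiator transport modules
    kill "$nvmfpid" && wait "$nvmfpid"                 # stop the nvmf_tgt reactors
    # remove only the rules tagged SPDK_NVMF, leaving other firewall state alone
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk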
00:08:49.529 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65844 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.788 13:18:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:50.047 00:08:50.047 real 0m5.513s 00:08:50.047 user 0m16.156s 00:08:50.047 sys 0m2.327s 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.047 ************************************ 00:08:50.047 
END TEST nvmf_nmic 00:08:50.047 ************************************ 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.047 ************************************ 00:08:50.047 START TEST nvmf_fio_target 00:08:50.047 ************************************ 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.047 * Looking for test storage... 00:08:50.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.047 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.307 --rc genhtml_branch_coverage=1 00:08:50.307 --rc genhtml_function_coverage=1 00:08:50.307 --rc genhtml_legend=1 00:08:50.307 --rc geninfo_all_blocks=1 00:08:50.307 --rc geninfo_unexecuted_blocks=1 00:08:50.307 00:08:50.307 ' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.307 --rc genhtml_branch_coverage=1 00:08:50.307 --rc genhtml_function_coverage=1 00:08:50.307 --rc genhtml_legend=1 00:08:50.307 --rc geninfo_all_blocks=1 00:08:50.307 --rc geninfo_unexecuted_blocks=1 00:08:50.307 00:08:50.307 ' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.307 --rc genhtml_branch_coverage=1 00:08:50.307 --rc genhtml_function_coverage=1 00:08:50.307 --rc genhtml_legend=1 00:08:50.307 --rc geninfo_all_blocks=1 00:08:50.307 --rc geninfo_unexecuted_blocks=1 00:08:50.307 00:08:50.307 ' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.307 --rc genhtml_branch_coverage=1 00:08:50.307 --rc genhtml_function_coverage=1 00:08:50.307 --rc genhtml_legend=1 00:08:50.307 --rc geninfo_all_blocks=1 00:08:50.307 --rc geninfo_unexecuted_blocks=1 00:08:50.307 00:08:50.307 ' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:50.307 
13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.307 13:18:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.307 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:50.307 Cannot find device "nvmf_init_br" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:50.308 Cannot find device "nvmf_init_br2" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:50.308 Cannot find device "nvmf_tgt_br" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.308 Cannot find device "nvmf_tgt_br2" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:50.308 Cannot find device "nvmf_init_br" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:50.308 Cannot find device "nvmf_init_br2" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:50.308 Cannot find device "nvmf_tgt_br" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:50.308 Cannot find device "nvmf_tgt_br2" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:50.308 Cannot find device "nvmf_br" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:50.308 Cannot find device "nvmf_init_if" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:50.308 Cannot find device "nvmf_init_if2" 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:50.308 
13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.308 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:50.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:08:50.567 00:08:50.567 --- 10.0.0.3 ping statistics --- 00:08:50.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.567 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:50.567 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:50.567 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.130 ms 00:08:50.567 00:08:50.567 --- 10.0.0.4 ping statistics --- 00:08:50.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.567 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:50.567 00:08:50.567 --- 10.0.0.1 ping statistics --- 00:08:50.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.567 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:50.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:50.567 00:08:50.567 --- 10.0.0.2 ping statistics --- 00:08:50.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.567 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66167 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66167 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66167 ']' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.567 13:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:50.567 [2024-11-17 13:18:39.775951] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
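Once this second target instance is up inside the namespace, the fio test provisions a richer backing store than the nmic test did: a TCP transport, seven malloc ramdisks, a RAID-0 and a concat bdev built from them, and a single subsystem exposing Malloc0, Malloc1, raid0 and concat0 as four namespaces on 10.0.0.3:4420. The rpc.py sequence traced below boils down to roughly this sketch (bdev names are given explicitly here for readability, the script lets SPDK auto-name them, and its nvme connect also passes --hostnqn/--hostid):

    # provisioning sketch; rpc.py path assumes an SPDK checkout
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport, 8 KiB in-capsule data
    for i in 0 1 2 3 4 5 6; do
        $rpc bdev_malloc_create -b Malloc$i 64 512             # 64 MiB ramdisk, 512 B blocks
    done
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b "Malloc2 Malloc3"
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc4 Malloc5 Malloc6"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # one connect from the root namespace then yields four namespaces
    # (nvme0n1..nvme0n4) for the four-job fio run
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420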
00:08:50.567 [2024-11-17 13:18:39.776039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.826 [2024-11-17 13:18:39.928842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.826 [2024-11-17 13:18:39.982838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.826 [2024-11-17 13:18:39.983162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.826 [2024-11-17 13:18:39.983332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.826 [2024-11-17 13:18:39.983477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.827 [2024-11-17 13:18:39.983527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.827 [2024-11-17 13:18:39.984938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.827 [2024-11-17 13:18:39.985004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.827 [2024-11-17 13:18:39.985219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.827 [2024-11-17 13:18:39.985077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.827 [2024-11-17 13:18:40.041686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.088 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.347 [2024-11-17 13:18:40.364387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.347 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.605 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:51.605 13:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.864 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:51.864 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.122 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:52.122 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.381 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:52.381 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:52.639 13:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.898 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:52.898 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.156 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:53.156 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.415 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:53.415 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:53.673 13:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.932 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:53.932 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.191 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:54.191 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:54.449 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:54.708 [2024-11-17 13:18:43.766047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:54.708 13:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:54.967 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:55.226 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:55.226 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:55.226 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:55.226 13:18:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:55.226 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:55.226 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:55.226 13:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:57.759 13:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:57.759 [global] 00:08:57.759 thread=1 00:08:57.759 invalidate=1 00:08:57.759 rw=write 00:08:57.759 time_based=1 00:08:57.759 runtime=1 00:08:57.759 ioengine=libaio 00:08:57.759 direct=1 00:08:57.759 bs=4096 00:08:57.759 iodepth=1 00:08:57.759 norandommap=0 00:08:57.759 numjobs=1 00:08:57.759 00:08:57.759 verify_dump=1 00:08:57.759 verify_backlog=512 00:08:57.759 verify_state_save=0 00:08:57.759 do_verify=1 00:08:57.759 verify=crc32c-intel 00:08:57.759 [job0] 00:08:57.759 filename=/dev/nvme0n1 00:08:57.759 [job1] 00:08:57.759 filename=/dev/nvme0n2 00:08:57.759 [job2] 00:08:57.759 filename=/dev/nvme0n3 00:08:57.759 [job3] 00:08:57.759 filename=/dev/nvme0n4 00:08:57.759 Could not set queue depth (nvme0n1) 00:08:57.759 Could not set queue depth (nvme0n2) 00:08:57.759 Could not set queue depth (nvme0n3) 00:08:57.759 Could not set queue depth (nvme0n4) 00:08:57.759 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.759 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.759 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.759 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:57.759 fio-3.35 00:08:57.759 Starting 4 threads 00:08:58.695 00:08:58.695 job0: (groupid=0, jobs=1): err= 0: pid=66338: Sun Nov 17 13:18:47 2024 00:08:58.695 read: IOPS=2896, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:08:58.695 slat (nsec): min=11040, max=56428, avg=13781.64, stdev=3774.44 00:08:58.695 clat (usec): min=135, max=6109, avg=174.32, stdev=168.75 00:08:58.695 lat (usec): min=147, max=6121, avg=188.11, stdev=169.05 00:08:58.695 clat percentiles (usec): 00:08:58.695 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:08:58.695 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:08:58.695 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 198], 00:08:58.695 | 99.00th=[ 219], 99.50th=[ 281], 99.90th=[ 3752], 99.95th=[ 3851], 00:08:58.696 | 99.99th=[ 
6128] 00:08:58.696 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:58.696 slat (nsec): min=13950, max=78215, avg=19978.41, stdev=5090.25 00:08:58.696 clat (usec): min=92, max=299, avg=124.69, stdev=16.00 00:08:58.696 lat (usec): min=109, max=323, avg=144.67, stdev=17.03 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 113], 00:08:58.696 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 127], 00:08:58.696 | 70.00th=[ 130], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 155], 00:08:58.696 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 198], 99.95th=[ 297], 00:08:58.696 | 99.99th=[ 302] 00:08:58.696 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:58.696 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:58.696 lat (usec) : 100=1.06%, 250=98.58%, 500=0.23%, 750=0.03%, 1000=0.02% 00:08:58.696 lat (msec) : 4=0.07%, 10=0.02% 00:08:58.696 cpu : usr=1.60%, sys=8.60%, ctx=5972, majf=0, minf=15 00:08:58.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 issued rwts: total=2899,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.696 job1: (groupid=0, jobs=1): err= 0: pid=66339: Sun Nov 17 13:18:47 2024 00:08:58.696 read: IOPS=1879, BW=7516KiB/s (7697kB/s)(7524KiB/1001msec) 00:08:58.696 slat (usec): min=11, max=109, avg=16.44, stdev= 5.52 00:08:58.696 clat (usec): min=143, max=614, avg=270.16, stdev=27.78 00:08:58.696 lat (usec): min=155, max=633, avg=286.60, stdev=27.11 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 219], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 251], 00:08:58.696 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:08:58.696 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:08:58.696 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 519], 99.95th=[ 611], 00:08:58.696 | 99.99th=[ 611] 00:08:58.696 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:58.696 slat (nsec): min=16064, max=94389, avg=23253.07, stdev=6490.65 00:08:58.696 clat (usec): min=100, max=744, avg=197.93, stdev=23.76 00:08:58.696 lat (usec): min=120, max=768, avg=221.18, stdev=24.21 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 147], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:08:58.696 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:08:58.696 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 235], 00:08:58.696 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 293], 00:08:58.696 | 99.99th=[ 742] 00:08:58.696 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.696 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.696 lat (usec) : 250=60.45%, 500=39.45%, 750=0.10% 00:08:58.696 cpu : usr=1.50%, sys=6.40%, ctx=3932, majf=0, minf=7 00:08:58.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 issued rwts: total=1881,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.696 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:08:58.696 job2: (groupid=0, jobs=1): err= 0: pid=66340: Sun Nov 17 13:18:47 2024 00:08:58.696 read: IOPS=1870, BW=7481KiB/s (7660kB/s)(7488KiB/1001msec) 00:08:58.696 slat (nsec): min=11608, max=49661, avg=14142.42, stdev=3612.51 00:08:58.696 clat (usec): min=175, max=1608, avg=273.71, stdev=41.06 00:08:58.696 lat (usec): min=190, max=1621, avg=287.86, stdev=41.04 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:08:58.696 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:08:58.696 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:08:58.696 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 725], 99.95th=[ 1614], 00:08:58.696 | 99.99th=[ 1614] 00:08:58.696 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:58.696 slat (nsec): min=16078, max=96427, avg=20746.57, stdev=5649.21 00:08:58.696 clat (usec): min=116, max=745, avg=200.95, stdev=23.24 00:08:58.696 lat (usec): min=134, max=764, avg=221.69, stdev=24.18 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:08:58.696 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:08:58.696 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 237], 00:08:58.696 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 343], 99.95th=[ 379], 00:08:58.696 | 99.99th=[ 742] 00:08:58.696 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:58.696 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:58.696 lat (usec) : 250=57.86%, 500=42.02%, 750=0.10% 00:08:58.696 lat (msec) : 2=0.03% 00:08:58.696 cpu : usr=1.10%, sys=5.80%, ctx=3920, majf=0, minf=11 00:08:58.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 issued rwts: total=1872,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.696 job3: (groupid=0, jobs=1): err= 0: pid=66341: Sun Nov 17 13:18:47 2024 00:08:58.696 read: IOPS=2844, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:08:58.696 slat (nsec): min=10918, max=45646, avg=13336.10, stdev=3159.92 00:08:58.696 clat (usec): min=142, max=1901, avg=171.84, stdev=36.63 00:08:58.696 lat (usec): min=153, max=1917, avg=185.17, stdev=36.91 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:08:58.696 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:08:58.696 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:08:58.696 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 314], 99.95th=[ 469], 00:08:58.696 | 99.99th=[ 1909] 00:08:58.696 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:58.696 slat (nsec): min=13114, max=85650, avg=19525.86, stdev=5160.86 00:08:58.696 clat (usec): min=99, max=216, avg=131.14, stdev=14.88 00:08:58.696 lat (usec): min=116, max=271, avg=150.67, stdev=16.15 00:08:58.696 clat percentiles (usec): 00:08:58.696 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 119], 00:08:58.696 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:08:58.696 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 159], 00:08:58.696 | 
99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 204], 00:08:58.696 | 99.99th=[ 217] 00:08:58.696 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:08:58.696 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:58.696 lat (usec) : 100=0.02%, 250=99.92%, 500=0.05% 00:08:58.696 lat (msec) : 2=0.02% 00:08:58.696 cpu : usr=3.40%, sys=6.60%, ctx=5919, majf=0, minf=3 00:08:58.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.696 issued rwts: total=2847,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.696 00:08:58.696 Run status group 0 (all jobs): 00:08:58.696 READ: bw=37.1MiB/s (38.9MB/s), 7481KiB/s-11.3MiB/s (7660kB/s-11.9MB/s), io=37.1MiB (38.9MB), run=1001-1001msec 00:08:58.696 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:08:58.696 00:08:58.696 Disk stats (read/write): 00:08:58.696 nvme0n1: ios=2577/2560, merge=0/0, ticks=456/347, in_queue=803, util=86.97% 00:08:58.696 nvme0n2: ios=1575/1890, merge=0/0, ticks=432/382, in_queue=814, util=88.25% 00:08:58.696 nvme0n3: ios=1536/1872, merge=0/0, ticks=419/390, in_queue=809, util=89.14% 00:08:58.696 nvme0n4: ios=2498/2560, merge=0/0, ticks=444/348, in_queue=792, util=89.70% 00:08:58.696 13:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:58.696 [global] 00:08:58.696 thread=1 00:08:58.696 invalidate=1 00:08:58.696 rw=randwrite 00:08:58.696 time_based=1 00:08:58.696 runtime=1 00:08:58.696 ioengine=libaio 00:08:58.696 direct=1 00:08:58.696 bs=4096 00:08:58.696 iodepth=1 00:08:58.696 norandommap=0 00:08:58.696 numjobs=1 00:08:58.696 00:08:58.696 verify_dump=1 00:08:58.696 verify_backlog=512 00:08:58.696 verify_state_save=0 00:08:58.696 do_verify=1 00:08:58.696 verify=crc32c-intel 00:08:58.696 [job0] 00:08:58.696 filename=/dev/nvme0n1 00:08:58.696 [job1] 00:08:58.696 filename=/dev/nvme0n2 00:08:58.696 [job2] 00:08:58.696 filename=/dev/nvme0n3 00:08:58.696 [job3] 00:08:58.696 filename=/dev/nvme0n4 00:08:58.696 Could not set queue depth (nvme0n1) 00:08:58.696 Could not set queue depth (nvme0n2) 00:08:58.696 Could not set queue depth (nvme0n3) 00:08:58.696 Could not set queue depth (nvme0n4) 00:08:58.955 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.955 fio-3.35 00:08:58.955 Starting 4 threads 00:09:00.330 00:09:00.330 job0: (groupid=0, jobs=1): err= 0: pid=66400: Sun Nov 17 13:18:49 2024 00:09:00.330 read: IOPS=3258, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:09:00.330 slat (nsec): min=10786, max=38448, avg=12080.86, stdev=1685.11 00:09:00.330 clat (usec): min=130, max=224, avg=153.92, stdev=10.49 00:09:00.330 lat (usec): min=143, max=236, avg=166.00, stdev=10.63 00:09:00.330 clat percentiles (usec): 
00:09:00.330 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:09:00.330 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:09:00.330 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:09:00.330 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 215], 00:09:00.330 | 99.99th=[ 225] 00:09:00.330 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:00.330 slat (nsec): min=12595, max=80158, avg=17367.77, stdev=3440.17 00:09:00.330 clat (usec): min=85, max=174, avg=107.67, stdev=10.18 00:09:00.330 lat (usec): min=102, max=255, avg=125.04, stdev=11.41 00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 90], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 99], 00:09:00.330 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:09:00.330 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 122], 95.00th=[ 127], 00:09:00.330 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 151], 99.95th=[ 157], 00:09:00.330 | 99.99th=[ 176] 00:09:00.330 bw ( KiB/s): min=15193, max=15193, per=35.36%, avg=15193.00, stdev= 0.00, samples=1 00:09:00.330 iops : min= 3798, max= 3798, avg=3798.00, stdev= 0.00, samples=1 00:09:00.330 lat (usec) : 100=12.27%, 250=87.73% 00:09:00.330 cpu : usr=2.00%, sys=8.30%, ctx=6846, majf=0, minf=9 00:09:00.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 issued rwts: total=3262,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.330 job1: (groupid=0, jobs=1): err= 0: pid=66401: Sun Nov 17 13:18:49 2024 00:09:00.330 read: IOPS=2011, BW=8048KiB/s (8241kB/s)(8056KiB/1001msec) 00:09:00.330 slat (nsec): min=10957, max=35611, avg=12163.31, stdev=1521.87 00:09:00.330 clat (usec): min=187, max=866, avg=265.61, stdev=29.17 00:09:00.330 lat (usec): min=198, max=878, avg=277.77, stdev=29.33 00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:09:00.330 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:09:00.330 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:09:00.330 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 627], 99.95th=[ 857], 00:09:00.330 | 99.99th=[ 865] 00:09:00.330 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:00.330 slat (nsec): min=16412, max=83236, avg=18570.65, stdev=4350.00 00:09:00.330 clat (usec): min=90, max=915, avg=193.23, stdev=26.26 00:09:00.330 lat (usec): min=108, max=934, avg=211.80, stdev=27.13 00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 120], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:09:00.330 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:09:00.330 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:09:00.330 | 99.00th=[ 243], 99.50th=[ 326], 99.90th=[ 367], 99.95th=[ 486], 00:09:00.330 | 99.99th=[ 914] 00:09:00.330 bw ( KiB/s): min= 8192, max= 8192, per=19.07%, avg=8192.00, stdev= 0.00, samples=1 00:09:00.330 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:00.330 lat (usec) : 100=0.02%, 250=60.46%, 500=39.39%, 750=0.05%, 1000=0.07% 00:09:00.330 cpu : usr=1.70%, sys=4.80%, ctx=4062, majf=0, minf=9 00:09:00.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:00.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 issued rwts: total=2014,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.330 job2: (groupid=0, jobs=1): err= 0: pid=66402: Sun Nov 17 13:18:49 2024 00:09:00.330 read: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:09:00.330 slat (nsec): min=10760, max=56805, avg=13026.92, stdev=2123.36 00:09:00.330 clat (usec): min=143, max=1504, avg=174.95, stdev=30.87 00:09:00.330 lat (usec): min=154, max=1528, avg=187.98, stdev=31.39 00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:00.330 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:00.330 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:09:00.330 | 99.00th=[ 215], 99.50th=[ 241], 99.90th=[ 424], 99.95th=[ 498], 00:09:00.330 | 99.99th=[ 1500] 00:09:00.330 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:00.330 slat (nsec): min=12762, max=67754, avg=19048.63, stdev=3679.72 00:09:00.330 clat (usec): min=99, max=545, avg=129.40, stdev=16.65 00:09:00.330 lat (usec): min=115, max=566, avg=148.45, stdev=17.79 00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:09:00.330 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:09:00.330 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:09:00.330 | 99.00th=[ 169], 99.50th=[ 186], 99.90th=[ 277], 99.95th=[ 293], 00:09:00.330 | 99.99th=[ 545] 00:09:00.330 bw ( KiB/s): min=12263, max=12263, per=28.54%, avg=12263.00, stdev= 0.00, samples=1 00:09:00.330 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:00.330 lat (usec) : 100=0.03%, 250=99.61%, 500=0.32%, 750=0.02% 00:09:00.330 lat (msec) : 2=0.02% 00:09:00.330 cpu : usr=2.20%, sys=7.60%, ctx=5912, majf=0, minf=9 00:09:00.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 issued rwts: total=2840,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.330 job3: (groupid=0, jobs=1): err= 0: pid=66403: Sun Nov 17 13:18:49 2024 00:09:00.330 read: IOPS=2018, BW=8076KiB/s (8270kB/s)(8084KiB/1001msec) 00:09:00.330 slat (nsec): min=10987, max=36170, avg=12297.24, stdev=1520.39 00:09:00.330 clat (usec): min=151, max=865, avg=264.21, stdev=23.87 00:09:00.330 lat (usec): min=165, max=876, avg=276.50, stdev=23.86 00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:09:00.330 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:09:00.330 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:09:00.330 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 363], 99.95th=[ 396], 00:09:00.330 | 99.99th=[ 865] 00:09:00.330 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:00.330 slat (nsec): min=16164, max=81260, avg=18715.12, stdev=4429.26 00:09:00.330 clat (usec): min=108, max=2211, avg=193.42, stdev=48.78 00:09:00.330 lat (usec): min=125, max=2229, avg=212.14, stdev=49.05 
00:09:00.330 clat percentiles (usec): 00:09:00.330 | 1.00th=[ 126], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 182], 00:09:00.330 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:09:00.330 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:09:00.330 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 453], 99.95th=[ 603], 00:09:00.330 | 99.99th=[ 2212] 00:09:00.330 bw ( KiB/s): min= 8192, max= 8192, per=19.07%, avg=8192.00, stdev= 0.00, samples=1 00:09:00.330 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:00.330 lat (usec) : 250=61.22%, 500=38.71%, 750=0.02%, 1000=0.02% 00:09:00.330 lat (msec) : 4=0.02% 00:09:00.330 cpu : usr=1.70%, sys=4.90%, ctx=4072, majf=0, minf=17 00:09:00.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:00.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.330 issued rwts: total=2021,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:00.330 00:09:00.330 Run status group 0 (all jobs): 00:09:00.330 READ: bw=39.6MiB/s (41.5MB/s), 8048KiB/s-12.7MiB/s (8241kB/s-13.3MB/s), io=39.6MiB (41.5MB), run=1001-1001msec 00:09:00.330 WRITE: bw=42.0MiB/s (44.0MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=42.0MiB (44.0MB), run=1001-1001msec 00:09:00.330 00:09:00.330 Disk stats (read/write): 00:09:00.330 nvme0n1: ios=2854/3072, merge=0/0, ticks=499/346, in_queue=845, util=88.08% 00:09:00.330 nvme0n2: ios=1549/2028, merge=0/0, ticks=416/403, in_queue=819, util=87.60% 00:09:00.330 nvme0n3: ios=2477/2560, merge=0/0, ticks=441/346, in_queue=787, util=89.21% 00:09:00.330 nvme0n4: ios=1536/2042, merge=0/0, ticks=402/410, in_queue=812, util=89.69% 00:09:00.330 13:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:00.330 [global] 00:09:00.330 thread=1 00:09:00.330 invalidate=1 00:09:00.330 rw=write 00:09:00.330 time_based=1 00:09:00.330 runtime=1 00:09:00.330 ioengine=libaio 00:09:00.330 direct=1 00:09:00.330 bs=4096 00:09:00.330 iodepth=128 00:09:00.330 norandommap=0 00:09:00.330 numjobs=1 00:09:00.330 00:09:00.330 verify_dump=1 00:09:00.330 verify_backlog=512 00:09:00.330 verify_state_save=0 00:09:00.330 do_verify=1 00:09:00.330 verify=crc32c-intel 00:09:00.330 [job0] 00:09:00.330 filename=/dev/nvme0n1 00:09:00.330 [job1] 00:09:00.330 filename=/dev/nvme0n2 00:09:00.330 [job2] 00:09:00.330 filename=/dev/nvme0n3 00:09:00.330 [job3] 00:09:00.330 filename=/dev/nvme0n4 00:09:00.330 Could not set queue depth (nvme0n1) 00:09:00.330 Could not set queue depth (nvme0n2) 00:09:00.330 Could not set queue depth (nvme0n3) 00:09:00.330 Could not set queue depth (nvme0n4) 00:09:00.330 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.330 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.330 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.330 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:00.330 fio-3.35 00:09:00.330 Starting 4 threads 00:09:01.709 00:09:01.709 job0: (groupid=0, jobs=1): err= 0: pid=66460: Sun Nov 17 13:18:50 2024 00:09:01.709 read: IOPS=5620, 
BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:01.709 slat (usec): min=4, max=4325, avg=87.56, stdev=378.53 00:09:01.709 clat (usec): min=5822, max=16119, avg=11697.58, stdev=963.22 00:09:01.709 lat (usec): min=5836, max=16152, avg=11785.13, stdev=972.00 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:09:01.709 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:09:01.709 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12518], 95.00th=[13042], 00:09:01.709 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15139], 99.95th=[15270], 00:09:01.709 | 99.99th=[16057] 00:09:01.709 write: IOPS=5641, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1002msec); 0 zone resets 00:09:01.709 slat (usec): min=11, max=4991, avg=82.14, stdev=465.79 00:09:01.709 clat (usec): min=508, max=16057, avg=10760.64, stdev=1081.89 00:09:01.709 lat (usec): min=2334, max=16103, avg=10842.78, stdev=1161.75 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10421], 00:09:01.709 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10683], 60.00th=[10814], 00:09:01.709 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:09:01.709 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15533], 99.95th=[15664], 00:09:01.709 | 99.99th=[16057] 00:09:01.709 bw ( KiB/s): min=21352, max=23704, per=34.76%, avg=22528.00, stdev=1663.12, samples=2 00:09:01.709 iops : min= 5338, max= 5926, avg=5632.00, stdev=415.78, samples=2 00:09:01.709 lat (usec) : 750=0.01% 00:09:01.709 lat (msec) : 4=0.18%, 10=6.52%, 20=93.29% 00:09:01.709 cpu : usr=5.39%, sys=14.39%, ctx=349, majf=0, minf=11 00:09:01.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:01.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.709 issued rwts: total=5632,5653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.709 job1: (groupid=0, jobs=1): err= 0: pid=66461: Sun Nov 17 13:18:50 2024 00:09:01.709 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:01.709 slat (usec): min=4, max=7501, avg=194.08, stdev=716.65 00:09:01.709 clat (usec): min=18045, max=32973, avg=24314.63, stdev=2231.31 00:09:01.709 lat (usec): min=18523, max=33026, avg=24508.71, stdev=2228.73 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[19530], 5.00th=[20579], 10.00th=[21365], 20.00th=[22676], 00:09:01.709 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:09:01.709 | 70.00th=[25035], 80.00th=[25822], 90.00th=[27132], 95.00th=[28705], 00:09:01.709 | 99.00th=[30016], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:09:01.709 | 99.99th=[32900] 00:09:01.709 write: IOPS=2657, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1005msec); 0 zone resets 00:09:01.709 slat (usec): min=7, max=7091, avg=180.62, stdev=659.75 00:09:01.709 clat (usec): min=4817, max=37132, avg=23942.82, stdev=4673.38 00:09:01.709 lat (usec): min=5103, max=37155, avg=24123.43, stdev=4684.42 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[ 7570], 5.00th=[17171], 10.00th=[18220], 20.00th=[20841], 00:09:01.709 | 30.00th=[22152], 40.00th=[23200], 50.00th=[24773], 60.00th=[25297], 00:09:01.709 | 70.00th=[25822], 80.00th=[26346], 90.00th=[28705], 95.00th=[31589], 00:09:01.709 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 
00:09:01.709 | 99.99th=[36963] 00:09:01.709 bw ( KiB/s): min= 9008, max=11472, per=15.80%, avg=10240.00, stdev=1742.31, samples=2 00:09:01.709 iops : min= 2252, max= 2868, avg=2560.00, stdev=435.58, samples=2 00:09:01.709 lat (msec) : 10=0.59%, 20=9.08%, 50=90.33% 00:09:01.709 cpu : usr=2.39%, sys=7.97%, ctx=783, majf=0, minf=17 00:09:01.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:01.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.709 issued rwts: total=2560,2671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.709 job2: (groupid=0, jobs=1): err= 0: pid=66462: Sun Nov 17 13:18:50 2024 00:09:01.709 read: IOPS=4882, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:09:01.709 slat (usec): min=7, max=3493, avg=97.19, stdev=457.29 00:09:01.709 clat (usec): min=228, max=14448, avg=12788.84, stdev=1021.77 00:09:01.709 lat (usec): min=3722, max=14462, avg=12886.03, stdev=913.94 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[ 7570], 5.00th=[11207], 10.00th=[12387], 20.00th=[12649], 00:09:01.709 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:09:01.709 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:09:01.709 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14353], 99.95th=[14353], 00:09:01.709 | 99.99th=[14484] 00:09:01.709 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:01.709 slat (usec): min=8, max=2943, avg=95.01, stdev=409.63 00:09:01.709 clat (usec): min=9480, max=13402, avg=12490.20, stdev=528.07 00:09:01.709 lat (usec): min=10724, max=13420, avg=12585.20, stdev=330.60 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[10028], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:09:01.709 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:09:01.709 | 70.00th=[12780], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:09:01.709 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:09:01.709 | 99.99th=[13435] 00:09:01.709 bw ( KiB/s): min=20480, max=20521, per=31.63%, avg=20500.50, stdev=28.99, samples=2 00:09:01.709 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:01.709 lat (usec) : 250=0.01% 00:09:01.709 lat (msec) : 4=0.11%, 10=1.05%, 20=98.83% 00:09:01.709 cpu : usr=4.99%, sys=13.17%, ctx=314, majf=0, minf=17 00:09:01.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:01.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.709 issued rwts: total=4897,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.709 job3: (groupid=0, jobs=1): err= 0: pid=66463: Sun Nov 17 13:18:50 2024 00:09:01.709 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:09:01.709 slat (usec): min=5, max=7682, avg=194.18, stdev=730.06 00:09:01.709 clat (usec): min=15467, max=33945, avg=24673.50, stdev=3120.20 00:09:01.709 lat (usec): min=15489, max=33959, avg=24867.68, stdev=3136.02 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[17171], 5.00th=[19530], 10.00th=[20841], 20.00th=[22152], 00:09:01.709 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24773], 00:09:01.709 | 
70.00th=[25822], 80.00th=[27132], 90.00th=[28705], 95.00th=[30540], 00:09:01.709 | 99.00th=[32113], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:01.709 | 99.99th=[33817] 00:09:01.709 write: IOPS=2854, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1007msec); 0 zone resets 00:09:01.709 slat (usec): min=9, max=7795, avg=167.87, stdev=619.36 00:09:01.709 clat (usec): min=5329, max=34756, avg=22217.32, stdev=4276.84 00:09:01.709 lat (usec): min=6472, max=34781, avg=22385.19, stdev=4290.35 00:09:01.709 clat percentiles (usec): 00:09:01.709 | 1.00th=[ 9765], 5.00th=[16057], 10.00th=[17171], 20.00th=[18482], 00:09:01.709 | 30.00th=[19268], 40.00th=[21103], 50.00th=[22152], 60.00th=[23200], 00:09:01.709 | 70.00th=[25297], 80.00th=[26084], 90.00th=[27395], 95.00th=[28967], 00:09:01.709 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32900], 99.95th=[32900], 00:09:01.709 | 99.99th=[34866] 00:09:01.709 bw ( KiB/s): min= 9680, max=12312, per=16.96%, avg=10996.00, stdev=1861.11, samples=2 00:09:01.709 iops : min= 2420, max= 3078, avg=2749.00, stdev=465.28, samples=2 00:09:01.709 lat (msec) : 10=0.64%, 20=19.60%, 50=79.76% 00:09:01.709 cpu : usr=1.99%, sys=8.45%, ctx=859, majf=0, minf=7 00:09:01.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:01.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.709 issued rwts: total=2560,2874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.709 00:09:01.709 Run status group 0 (all jobs): 00:09:01.709 READ: bw=60.7MiB/s (63.7MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=61.1MiB (64.1MB), run=1002-1007msec 00:09:01.709 WRITE: bw=63.3MiB/s (66.4MB/s), 10.4MiB/s-22.0MiB/s (10.9MB/s-23.1MB/s), io=63.7MiB (66.8MB), run=1002-1007msec 00:09:01.709 00:09:01.709 Disk stats (read/write): 00:09:01.709 nvme0n1: ios=4658/5109, merge=0/0, ticks=25782/22715, in_queue=48497, util=87.88% 00:09:01.709 nvme0n2: ios=2081/2365, merge=0/0, ticks=15908/17791, in_queue=33699, util=87.35% 00:09:01.709 nvme0n3: ios=4096/4512, merge=0/0, ticks=11813/11928, in_queue=23741, util=89.12% 00:09:01.709 nvme0n4: ios=2067/2560, merge=0/0, ticks=16655/17234, in_queue=33889, util=89.68% 00:09:01.709 13:18:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:01.709 [global] 00:09:01.709 thread=1 00:09:01.709 invalidate=1 00:09:01.709 rw=randwrite 00:09:01.709 time_based=1 00:09:01.709 runtime=1 00:09:01.709 ioengine=libaio 00:09:01.709 direct=1 00:09:01.709 bs=4096 00:09:01.709 iodepth=128 00:09:01.709 norandommap=0 00:09:01.709 numjobs=1 00:09:01.709 00:09:01.709 verify_dump=1 00:09:01.709 verify_backlog=512 00:09:01.709 verify_state_save=0 00:09:01.709 do_verify=1 00:09:01.709 verify=crc32c-intel 00:09:01.709 [job0] 00:09:01.709 filename=/dev/nvme0n1 00:09:01.709 [job1] 00:09:01.709 filename=/dev/nvme0n2 00:09:01.709 [job2] 00:09:01.709 filename=/dev/nvme0n3 00:09:01.709 [job3] 00:09:01.709 filename=/dev/nvme0n4 00:09:01.709 Could not set queue depth (nvme0n1) 00:09:01.709 Could not set queue depth (nvme0n2) 00:09:01.709 Could not set queue depth (nvme0n3) 00:09:01.709 Could not set queue depth (nvme0n4) 00:09:01.709 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.709 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.709 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.709 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.709 fio-3.35 00:09:01.709 Starting 4 threads 00:09:03.086 00:09:03.086 job0: (groupid=0, jobs=1): err= 0: pid=66519: Sun Nov 17 13:18:51 2024 00:09:03.086 read: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1002msec) 00:09:03.086 slat (usec): min=4, max=5913, avg=101.77, stdev=516.71 00:09:03.086 clat (usec): min=1465, max=18456, avg=12767.77, stdev=1644.88 00:09:03.086 lat (usec): min=1477, max=21790, avg=12869.53, stdev=1692.62 00:09:03.086 clat percentiles (usec): 00:09:03.086 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[12125], 00:09:03.086 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:09:03.086 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14222], 95.00th=[15664], 00:09:03.086 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:09:03.086 | 99.99th=[18482] 00:09:03.086 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:03.086 slat (usec): min=10, max=6645, avg=89.94, stdev=459.16 00:09:03.086 clat (usec): min=5751, max=18654, avg=12511.72, stdev=1445.93 00:09:03.086 lat (usec): min=5777, max=19064, avg=12601.66, stdev=1507.50 00:09:03.086 clat percentiles (usec): 00:09:03.086 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[11076], 20.00th=[11731], 00:09:03.086 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:09:03.086 | 70.00th=[13042], 80.00th=[13566], 90.00th=[13698], 95.00th=[15139], 00:09:03.086 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:09:03.086 | 99.99th=[18744] 00:09:03.086 bw ( KiB/s): min=20480, max=20521, per=26.45%, avg=20500.50, stdev=28.99, samples=2 00:09:03.086 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:03.086 lat (msec) : 2=0.04%, 4=0.22%, 10=3.70%, 20=96.04% 00:09:03.086 cpu : usr=4.80%, sys=13.49%, ctx=434, majf=0, minf=13 00:09:03.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:03.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.086 issued rwts: total=4905,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.086 job1: (groupid=0, jobs=1): err= 0: pid=66520: Sun Nov 17 13:18:51 2024 00:09:03.086 read: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec) 00:09:03.086 slat (usec): min=5, max=7453, avg=100.97, stdev=501.44 00:09:03.086 clat (usec): min=551, max=20266, avg=13093.81, stdev=1434.52 00:09:03.086 lat (usec): min=4595, max=20287, avg=13194.78, stdev=1451.72 00:09:03.086 clat percentiles (usec): 00:09:03.086 | 1.00th=[ 5211], 5.00th=[11338], 10.00th=[11994], 20.00th=[12518], 00:09:03.086 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:09:03.086 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14877], 00:09:03.086 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18220], 99.95th=[19268], 00:09:03.086 | 99.99th=[20317] 00:09:03.086 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:03.086 slat (usec): min=10, max=9261, avg=94.71, stdev=540.72 00:09:03.086 clat (usec): min=6519, max=22466, 
avg=12673.29, stdev=1450.37 00:09:03.086 lat (usec): min=6567, max=22486, avg=12768.00, stdev=1535.42 00:09:03.086 clat percentiles (usec): 00:09:03.086 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11469], 20.00th=[11994], 00:09:03.086 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:09:03.086 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13829], 95.00th=[15795], 00:09:03.086 | 99.00th=[17171], 99.50th=[17957], 99.90th=[21627], 99.95th=[21627], 00:09:03.086 | 99.99th=[22414] 00:09:03.086 bw ( KiB/s): min=20464, max=20521, per=26.44%, avg=20492.50, stdev=40.31, samples=2 00:09:03.086 iops : min= 5116, max= 5130, avg=5123.00, stdev= 9.90, samples=2 00:09:03.086 lat (usec) : 750=0.01% 00:09:03.086 lat (msec) : 10=2.83%, 20=97.07%, 50=0.09% 00:09:03.086 cpu : usr=5.08%, sys=12.56%, ctx=327, majf=0, minf=11 00:09:03.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:03.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.087 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.087 job2: (groupid=0, jobs=1): err= 0: pid=66521: Sun Nov 17 13:18:51 2024 00:09:03.087 read: IOPS=4335, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1003msec) 00:09:03.087 slat (usec): min=9, max=6859, avg=107.49, stdev=688.01 00:09:03.087 clat (usec): min=1923, max=23693, avg=14837.36, stdev=1732.37 00:09:03.087 lat (usec): min=7382, max=28060, avg=14944.85, stdev=1763.51 00:09:03.087 clat percentiles (usec): 00:09:03.087 | 1.00th=[ 8160], 5.00th=[10552], 10.00th=[14091], 20.00th=[14615], 00:09:03.087 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:09:03.087 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15664], 95.00th=[15926], 00:09:03.087 | 99.00th=[22414], 99.50th=[23200], 99.90th=[23725], 99.95th=[23725], 00:09:03.087 | 99.99th=[23725] 00:09:03.087 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:03.087 slat (usec): min=7, max=11038, avg=107.90, stdev=660.33 00:09:03.087 clat (usec): min=7435, max=19951, avg=13586.40, stdev=1307.32 00:09:03.087 lat (usec): min=9542, max=19973, avg=13694.30, stdev=1169.14 00:09:03.087 clat percentiles (usec): 00:09:03.087 | 1.00th=[ 8717], 5.00th=[11994], 10.00th=[12518], 20.00th=[12911], 00:09:03.087 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:09:03.087 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14484], 95.00th=[14746], 00:09:03.087 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:09:03.087 | 99.99th=[20055] 00:09:03.087 bw ( KiB/s): min=17928, max=18936, per=23.78%, avg=18432.00, stdev=712.76, samples=2 00:09:03.087 iops : min= 4482, max= 4734, avg=4608.00, stdev=178.19, samples=2 00:09:03.087 lat (msec) : 2=0.01%, 10=2.99%, 20=96.09%, 50=0.90% 00:09:03.087 cpu : usr=3.39%, sys=12.77%, ctx=182, majf=0, minf=13 00:09:03.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:03.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.087 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.087 job3: (groupid=0, jobs=1): err= 0: pid=66522: Sun Nov 17 13:18:51 2024 00:09:03.087 read: IOPS=4252, 
BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec) 00:09:03.087 slat (usec): min=7, max=6511, avg=111.55, stdev=538.02 00:09:03.087 clat (usec): min=412, max=18998, avg=14552.99, stdev=1496.33 00:09:03.087 lat (usec): min=3492, max=19009, avg=14664.53, stdev=1401.65 00:09:03.087 clat percentiles (usec): 00:09:03.087 | 1.00th=[ 7570], 5.00th=[12125], 10.00th=[14222], 20.00th=[14353], 00:09:03.087 | 30.00th=[14484], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:09:03.087 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15270], 95.00th=[15533], 00:09:03.087 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:09:03.087 | 99.99th=[19006] 00:09:03.087 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:09:03.087 slat (usec): min=11, max=3232, avg=106.16, stdev=462.64 00:09:03.087 clat (usec): min=10028, max=14878, avg=13984.53, stdev=614.35 00:09:03.087 lat (usec): min=11405, max=14925, avg=14090.69, stdev=398.82 00:09:03.087 clat percentiles (usec): 00:09:03.087 | 1.00th=[11207], 5.00th=[12911], 10.00th=[13566], 20.00th=[13698], 00:09:03.087 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:09:03.087 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14615], 95.00th=[14615], 00:09:03.087 | 99.00th=[14746], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:09:03.087 | 99.99th=[14877] 00:09:03.087 bw ( KiB/s): min=18220, max=18680, per=23.80%, avg=18450.00, stdev=325.27, samples=2 00:09:03.087 iops : min= 4555, max= 4670, avg=4612.50, stdev=81.32, samples=2 00:09:03.087 lat (usec) : 500=0.01% 00:09:03.087 lat (msec) : 4=0.28%, 10=0.44%, 20=99.27% 00:09:03.087 cpu : usr=3.60%, sys=13.29%, ctx=279, majf=0, minf=16 00:09:03.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:03.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.087 issued rwts: total=4257,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.087 00:09:03.087 Run status group 0 (all jobs): 00:09:03.087 READ: bw=71.0MiB/s (74.4MB/s), 16.6MiB/s-19.1MiB/s (17.4MB/s-20.1MB/s), io=71.3MiB (74.7MB), run=1001-1004msec 00:09:03.087 WRITE: bw=75.7MiB/s (79.4MB/s), 17.9MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1001-1004msec 00:09:03.087 00:09:03.087 Disk stats (read/write): 00:09:03.087 nvme0n1: ios=4146/4559, merge=0/0, ticks=24732/24767, in_queue=49499, util=88.58% 00:09:03.087 nvme0n2: ios=4145/4355, merge=0/0, ticks=26155/22928, in_queue=49083, util=88.69% 00:09:03.087 nvme0n3: ios=3584/4096, merge=0/0, ticks=50415/51501, in_queue=101916, util=89.30% 00:09:03.087 nvme0n4: ios=3584/4096, merge=0/0, ticks=11880/12516, in_queue=24396, util=89.75% 00:09:03.087 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:03.087 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66540 00:09:03.087 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:03.087 13:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:03.087 [global] 00:09:03.087 thread=1 00:09:03.087 invalidate=1 00:09:03.087 rw=read 00:09:03.087 time_based=1 00:09:03.087 runtime=10 00:09:03.087 ioengine=libaio 00:09:03.087 direct=1 00:09:03.087 bs=4096 00:09:03.087 iodepth=1 
00:09:03.087 norandommap=1 00:09:03.087 numjobs=1 00:09:03.087 00:09:03.087 [job0] 00:09:03.087 filename=/dev/nvme0n1 00:09:03.087 [job1] 00:09:03.087 filename=/dev/nvme0n2 00:09:03.087 [job2] 00:09:03.087 filename=/dev/nvme0n3 00:09:03.087 [job3] 00:09:03.087 filename=/dev/nvme0n4 00:09:03.087 Could not set queue depth (nvme0n1) 00:09:03.087 Could not set queue depth (nvme0n2) 00:09:03.087 Could not set queue depth (nvme0n3) 00:09:03.087 Could not set queue depth (nvme0n4) 00:09:03.087 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.087 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.087 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.087 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:03.087 fio-3.35 00:09:03.087 Starting 4 threads 00:09:06.405 13:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:06.405 fio: pid=66584, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.406 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39919616, buflen=4096 00:09:06.406 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:06.406 fio: pid=66583, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.406 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45453312, buflen=4096 00:09:06.406 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.406 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:06.665 fio: pid=66581, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.665 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14614528, buflen=4096 00:09:06.665 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.665 13:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:06.923 fio: pid=66582, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:06.923 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19087360, buflen=4096 00:09:06.923 00:09:06.923 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66581: Sun Nov 17 13:18:56 2024 00:09:06.923 read: IOPS=5773, BW=22.6MiB/s (23.6MB/s)(77.9MiB/3456msec) 00:09:06.923 slat (usec): min=9, max=13940, avg=14.51, stdev=147.15 00:09:06.923 clat (usec): min=117, max=1813, avg=157.43, stdev=24.37 00:09:06.923 lat (usec): min=128, max=14217, avg=171.94, stdev=149.97 00:09:06.923 clat percentiles (usec): 00:09:06.923 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:09:06.923 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:09:06.923 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:09:06.923 | 99.00th=[ 198], 
99.50th=[ 204], 99.90th=[ 269], 99.95th=[ 396], 00:09:06.923 | 99.99th=[ 1631] 00:09:06.923 bw ( KiB/s): min=22160, max=23656, per=35.03%, avg=23177.67, stdev=682.89, samples=6 00:09:06.923 iops : min= 5540, max= 5914, avg=5794.33, stdev=170.75, samples=6 00:09:06.923 lat (usec) : 250=99.87%, 500=0.10%, 750=0.01%, 1000=0.01% 00:09:06.923 lat (msec) : 2=0.02% 00:09:06.923 cpu : usr=1.79%, sys=6.25%, ctx=19958, majf=0, minf=1 00:09:06.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.923 issued rwts: total=19953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.923 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66582: Sun Nov 17 13:18:56 2024 00:09:06.923 read: IOPS=5628, BW=22.0MiB/s (23.1MB/s)(82.2MiB/3739msec) 00:09:06.923 slat (usec): min=9, max=10412, avg=14.48, stdev=138.81 00:09:06.923 clat (usec): min=3, max=25089, avg=161.89, stdev=176.11 00:09:06.924 lat (usec): min=128, max=25116, avg=176.37, stdev=224.67 00:09:06.924 clat percentiles (usec): 00:09:06.924 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:09:06.924 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:09:06.924 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 192], 00:09:06.924 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 388], 99.95th=[ 603], 00:09:06.924 | 99.99th=[ 2868] 00:09:06.924 bw ( KiB/s): min=21353, max=23200, per=34.03%, avg=22516.43, stdev=784.26, samples=7 00:09:06.924 iops : min= 5338, max= 5800, avg=5628.86, stdev=196.10, samples=7 00:09:06.924 lat (usec) : 4=0.01%, 250=99.78%, 500=0.14%, 750=0.04% 00:09:06.924 lat (msec) : 2=0.02%, 4=0.01%, 50=0.01% 00:09:06.924 cpu : usr=1.61%, sys=6.21%, ctx=21052, majf=0, minf=1 00:09:06.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.924 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.924 issued rwts: total=21045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.924 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66583: Sun Nov 17 13:18:56 2024 00:09:06.924 read: IOPS=3468, BW=13.5MiB/s (14.2MB/s)(43.3MiB/3200msec) 00:09:06.924 slat (usec): min=10, max=13858, avg=16.04, stdev=162.46 00:09:06.924 clat (usec): min=141, max=3248, avg=270.79, stdev=57.41 00:09:06.924 lat (usec): min=152, max=14048, avg=286.83, stdev=171.28 00:09:06.924 clat percentiles (usec): 00:09:06.924 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 184], 20.00th=[ 258], 00:09:06.924 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:09:06.924 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:09:06.924 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 553], 99.95th=[ 709], 00:09:06.924 | 99.99th=[ 2024] 00:09:06.924 bw ( KiB/s): min=13258, max=14353, per=20.49%, avg=13553.83, stdev=434.91, samples=6 00:09:06.924 iops : min= 3314, max= 3588, avg=3388.33, stdev=108.70, samples=6 00:09:06.924 lat (usec) : 250=14.66%, 500=85.18%, 750=0.11%, 1000=0.02% 00:09:06.924 lat (msec) : 2=0.01%, 4=0.02% 00:09:06.924 cpu : usr=1.06%, 
sys=3.97%, ctx=11102, majf=0, minf=2 00:09:06.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.924 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.924 issued rwts: total=11098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.924 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66584: Sun Nov 17 13:18:56 2024 00:09:06.924 read: IOPS=3294, BW=12.9MiB/s (13.5MB/s)(38.1MiB/2959msec) 00:09:06.924 slat (usec): min=10, max=181, avg=13.96, stdev= 4.28 00:09:06.924 clat (usec): min=152, max=7819, avg=287.91, stdev=124.84 00:09:06.924 lat (usec): min=164, max=7834, avg=301.87, stdev=124.91 00:09:06.924 clat percentiles (usec): 00:09:06.924 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:09:06.924 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:09:06.924 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 322], 00:09:06.924 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 1385], 99.95th=[ 3687], 00:09:06.924 | 99.99th=[ 7832] 00:09:06.924 bw ( KiB/s): min=12908, max=13272, per=19.92%, avg=13181.60, stdev=154.44, samples=5 00:09:06.924 iops : min= 3227, max= 3318, avg=3295.40, stdev=38.61, samples=5 00:09:06.924 lat (usec) : 250=3.16%, 500=96.61%, 750=0.06%, 1000=0.04% 00:09:06.924 lat (msec) : 2=0.03%, 4=0.06%, 10=0.02% 00:09:06.924 cpu : usr=1.12%, sys=4.09%, ctx=9759, majf=0, minf=2 00:09:06.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.924 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.924 issued rwts: total=9747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.924 00:09:06.924 Run status group 0 (all jobs): 00:09:06.924 READ: bw=64.6MiB/s (67.7MB/s), 12.9MiB/s-22.6MiB/s (13.5MB/s-23.6MB/s), io=242MiB (253MB), run=2959-3739msec 00:09:06.924 00:09:06.924 Disk stats (read/write): 00:09:06.924 nvme0n1: ios=19420/0, merge=0/0, ticks=3150/0, in_queue=3150, util=95.36% 00:09:06.924 nvme0n2: ios=20312/0, merge=0/0, ticks=3387/0, in_queue=3387, util=95.77% 00:09:06.924 nvme0n3: ios=10661/0, merge=0/0, ticks=2991/0, in_queue=2991, util=96.24% 00:09:06.924 nvme0n4: ios=9443/0, merge=0/0, ticks=2730/0, in_queue=2730, util=96.26% 00:09:06.924 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:06.924 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:07.182 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.182 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:07.441 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.441 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:09:07.700 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.700 13:18:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:07.958 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:07.958 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66540 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:08.216 nvmf hotplug test: fio failed as expected 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:08.216 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:08.474 
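Condensed, the hotplug teardown traced above amounts to the following shell sequence (a sketch using the paths and NQN from this run; fio exiting non-zero at this point is the expected outcome, hence "nvmf hotplug test: fio failed as expected"):

    # delete the malloc bdevs backing the subsystem namespaces while fio is still running
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
    # disconnect the initiator and remove the subsystem once fio has returned
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1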
13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.474 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.474 rmmod nvme_tcp 00:09:08.732 rmmod nvme_fabrics 00:09:08.732 rmmod nvme_keyring 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66167 ']' 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66167 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66167 ']' 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66167 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66167 00:09:08.732 killing process with pid 66167 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66167' 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66167 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66167 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:08.732 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:08.990 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:08.990 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:08.990 13:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:08.990 00:09:08.990 real 0m19.074s 00:09:08.990 user 1m10.135s 00:09:08.990 sys 0m10.747s 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.990 ************************************ 00:09:08.990 END TEST nvmf_fio_target 00:09:08.990 ************************************ 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.990 13:18:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.249 ************************************ 00:09:09.249 START TEST nvmf_bdevio 00:09:09.249 ************************************ 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:09.249 * Looking for test storage... 
00:09:09.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.249 --rc genhtml_branch_coverage=1 00:09:09.249 --rc genhtml_function_coverage=1 00:09:09.249 --rc genhtml_legend=1 00:09:09.249 --rc geninfo_all_blocks=1 00:09:09.249 --rc geninfo_unexecuted_blocks=1 00:09:09.249 00:09:09.249 ' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.249 --rc genhtml_branch_coverage=1 00:09:09.249 --rc genhtml_function_coverage=1 00:09:09.249 --rc genhtml_legend=1 00:09:09.249 --rc geninfo_all_blocks=1 00:09:09.249 --rc geninfo_unexecuted_blocks=1 00:09:09.249 00:09:09.249 ' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.249 --rc genhtml_branch_coverage=1 00:09:09.249 --rc genhtml_function_coverage=1 00:09:09.249 --rc genhtml_legend=1 00:09:09.249 --rc geninfo_all_blocks=1 00:09:09.249 --rc geninfo_unexecuted_blocks=1 00:09:09.249 00:09:09.249 ' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.249 --rc genhtml_branch_coverage=1 00:09:09.249 --rc genhtml_function_coverage=1 00:09:09.249 --rc genhtml_legend=1 00:09:09.249 --rc geninfo_all_blocks=1 00:09:09.249 --rc geninfo_unexecuted_blocks=1 00:09:09.249 00:09:09.249 ' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.249 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.250 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
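With MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set above, the target-side setup that bdevio.sh drives a little further below through its rpc_cmd helper is equivalent to this short RPC sequence (a sketch using the plain rpc.py client and the listener address from this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420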
00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.250 Cannot find device "nvmf_init_br" 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.250 Cannot find device "nvmf_init_br2" 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:09.250 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.508 Cannot find device "nvmf_tgt_br" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.508 Cannot find device "nvmf_tgt_br2" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.508 Cannot find device "nvmf_init_br" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.508 Cannot find device "nvmf_init_br2" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.508 Cannot find device "nvmf_tgt_br" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.508 Cannot find device "nvmf_tgt_br2" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.508 Cannot find device "nvmf_br" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.508 Cannot find device "nvmf_init_if" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.508 Cannot find device "nvmf_init_if2" 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.508 
13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.508 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.509 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.540 ms 00:09:09.767 00:09:09.767 --- 10.0.0.3 ping statistics --- 00:09:09.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.767 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.767 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.767 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:09:09.767 00:09:09.767 --- 10.0.0.4 ping statistics --- 00:09:09.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.767 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:09.767 00:09:09.767 --- 10.0.0.1 ping statistics --- 00:09:09.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.767 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:09.767 00:09:09.767 --- 10.0.0.2 ping statistics --- 00:09:09.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.767 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66906 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66906 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66906 ']' 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.767 13:18:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:09.767 [2024-11-17 13:18:58.847099] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:09.767 [2024-11-17 13:18:58.847174] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.026 [2024-11-17 13:18:58.987611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.026 [2024-11-17 13:18:59.039183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.026 [2024-11-17 13:18:59.039229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.026 [2024-11-17 13:18:59.039239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.026 [2024-11-17 13:18:59.039246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.026 [2024-11-17 13:18:59.039252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.026 [2024-11-17 13:18:59.040802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:10.026 [2024-11-17 13:18:59.040893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:10.026 [2024-11-17 13:18:59.041015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:10.026 [2024-11-17 13:18:59.041019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.026 [2024-11-17 13:18:59.095567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.591 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.592 [2024-11-17 13:18:59.792003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.592 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.850 Malloc0 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.850 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.851 [2024-11-17 13:18:59.860476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.851 { 00:09:10.851 "params": { 00:09:10.851 "name": "Nvme$subsystem", 00:09:10.851 "trtype": "$TEST_TRANSPORT", 00:09:10.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.851 "adrfam": "ipv4", 00:09:10.851 "trsvcid": "$NVMF_PORT", 00:09:10.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.851 "hdgst": ${hdgst:-false}, 00:09:10.851 "ddgst": ${ddgst:-false} 00:09:10.851 }, 00:09:10.851 "method": "bdev_nvme_attach_controller" 00:09:10.851 } 00:09:10.851 EOF 00:09:10.851 )") 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:10.851 13:18:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.851 "params": { 00:09:10.851 "name": "Nvme1", 00:09:10.851 "trtype": "tcp", 00:09:10.851 "traddr": "10.0.0.3", 00:09:10.851 "adrfam": "ipv4", 00:09:10.851 "trsvcid": "4420", 00:09:10.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.851 "hdgst": false, 00:09:10.851 "ddgst": false 00:09:10.851 }, 00:09:10.851 "method": "bdev_nvme_attach_controller" 00:09:10.851 }' 00:09:10.851 [2024-11-17 13:18:59.913464] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:09:10.851 [2024-11-17 13:18:59.913527] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66942 ] 00:09:10.851 [2024-11-17 13:19:00.057401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.109 [2024-11-17 13:19:00.102258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.109 [2024-11-17 13:19:00.102404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.109 [2024-11-17 13:19:00.102411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.109 [2024-11-17 13:19:00.164575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.109 I/O targets: 00:09:11.109 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:11.109 00:09:11.109 00:09:11.109 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.109 http://cunit.sourceforge.net/ 00:09:11.109 00:09:11.109 00:09:11.109 Suite: bdevio tests on: Nvme1n1 00:09:11.109 Test: blockdev write read block ...passed 00:09:11.109 Test: blockdev write zeroes read block ...passed 00:09:11.109 Test: blockdev write zeroes read no split ...passed 00:09:11.109 Test: blockdev write zeroes read split ...passed 00:09:11.109 Test: blockdev write zeroes read split partial ...passed 00:09:11.109 Test: blockdev reset ...[2024-11-17 13:19:00.309569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:11.109 [2024-11-17 13:19:00.309660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e61180 (9): Bad file descriptor 00:09:11.109 [2024-11-17 13:19:00.326811] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:11.109 passed 00:09:11.109 Test: blockdev write read 8 blocks ...passed 00:09:11.109 Test: blockdev write read size > 128k ...passed 00:09:11.110 Test: blockdev write read invalid size ...passed 00:09:11.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:11.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:11.110 Test: blockdev write read max offset ...passed 00:09:11.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:11.369 Test: blockdev writev readv 8 blocks ...passed 00:09:11.369 Test: blockdev writev readv 30 x 1block ...passed 00:09:11.369 Test: blockdev writev readv block ...passed 00:09:11.369 Test: blockdev writev readv size > 128k ...passed 00:09:11.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:11.369 Test: blockdev comparev and writev ...[2024-11-17 13:19:00.336024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.336310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.336554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.336848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.337409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.337572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.337833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.337968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.338445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.338643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.338895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.339032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.339481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.339514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.339534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:11.369 [2024-11-17 13:19:00.339544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:11.369 passed 00:09:11.369 Test: blockdev nvme passthru rw ...passed 00:09:11.369 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:19:00.340587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.369 [2024-11-17 13:19:00.340615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.340725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.369 [2024-11-17 13:19:00.340741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:11.369 [2024-11-17 13:19:00.340865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.369 [2024-11-17 13:19:00.340882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:11.369 passed 00:09:11.369 Test: blockdev nvme admin passthru ...[2024-11-17 13:19:00.340974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:11.369 [2024-11-17 13:19:00.340995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:11.369 passed 00:09:11.369 Test: blockdev copy ...passed 00:09:11.369 00:09:11.369 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.369 suites 1 1 n/a 0 0 00:09:11.369 tests 23 23 23 0 0 00:09:11.369 asserts 152 152 152 0 n/a 00:09:11.369 00:09:11.369 Elapsed time = 0.162 seconds 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.369 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.628 rmmod nvme_tcp 00:09:11.628 rmmod nvme_fabrics 00:09:11.628 rmmod nvme_keyring 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66906 ']' 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66906 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66906 ']' 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66906 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66906 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:11.628 killing process with pid 66906 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66906' 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66906 00:09:11.628 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66906 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.886 13:19:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.886 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:11.886 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.886 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.886 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:12.145 00:09:12.145 real 0m2.942s 00:09:12.145 user 0m8.751s 00:09:12.145 sys 0m0.834s 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.145 ************************************ 00:09:12.145 END TEST nvmf_bdevio 00:09:12.145 ************************************ 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:12.145 00:09:12.145 real 2m34.395s 00:09:12.145 user 6m37.804s 00:09:12.145 sys 0m54.151s 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.145 ************************************ 00:09:12.145 END TEST nvmf_target_core 00:09:12.145 ************************************ 00:09:12.145 13:19:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:12.145 13:19:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.145 13:19:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.145 13:19:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.145 ************************************ 00:09:12.145 START TEST nvmf_target_extra 00:09:12.145 ************************************ 00:09:12.145 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:12.146 * Looking for test storage... 
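The bdevio cleanup traced above (killprocess 66906 followed by nvmf_veth_fini and remove_spdk_ns) follows a fixed pattern: stop the target by pid, detach the veth peers from the bridge, delete the links, then drop the namespace. A minimal sketch of that pattern, reusing the device and namespace names from the trace and ignoring errors for devices that are already gone:

    # stop the target first, then dismantle the veth/bridge topology
    kill "$nvmfpid" 2>/dev/null || true                  # pid recorded when nvmf_tgt was launched
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true  # detach from the bridge
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true      # host-side initiator veths
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true # finally remove the namespace

Deleting one end of a veth pair removes its peer as well, which is why only the _if ends need an explicit delete.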
00:09:12.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:12.146 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.146 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.146 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.406 --rc genhtml_branch_coverage=1 00:09:12.406 --rc genhtml_function_coverage=1 00:09:12.406 --rc genhtml_legend=1 00:09:12.406 --rc geninfo_all_blocks=1 00:09:12.406 --rc geninfo_unexecuted_blocks=1 00:09:12.406 00:09:12.406 ' 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.406 --rc genhtml_branch_coverage=1 00:09:12.406 --rc genhtml_function_coverage=1 00:09:12.406 --rc genhtml_legend=1 00:09:12.406 --rc geninfo_all_blocks=1 00:09:12.406 --rc geninfo_unexecuted_blocks=1 00:09:12.406 00:09:12.406 ' 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.406 --rc genhtml_branch_coverage=1 00:09:12.406 --rc genhtml_function_coverage=1 00:09:12.406 --rc genhtml_legend=1 00:09:12.406 --rc geninfo_all_blocks=1 00:09:12.406 --rc geninfo_unexecuted_blocks=1 00:09:12.406 00:09:12.406 ' 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.406 --rc genhtml_branch_coverage=1 00:09:12.406 --rc genhtml_function_coverage=1 00:09:12.406 --rc genhtml_legend=1 00:09:12.406 --rc geninfo_all_blocks=1 00:09:12.406 --rc geninfo_unexecuted_blocks=1 00:09:12.406 00:09:12.406 ' 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.406 13:19:01 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.406 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:12.407 ************************************ 00:09:12.407 START TEST nvmf_auth_target 00:09:12.407 ************************************ 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:12.407 * Looking for test storage... 
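The lt/cmp_versions sequence traced after each "Found test storage" line (and repeated below for the auth test) decides whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared component by component, and the first differing component decides. A self-contained sketch of that idea; version_lt is a hypothetical stand-in for the scripts/common.sh helpers, not the real function:

    # return 0 (true) if version $1 is strictly older than version $2
    version_lt() {
        local -a a b
        local i n x y
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}      # missing components count as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                            # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is pre-2.x: pass the lcov_branch/function coverage flags"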
00:09:12.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.407 --rc genhtml_branch_coverage=1 00:09:12.407 --rc genhtml_function_coverage=1 00:09:12.407 --rc genhtml_legend=1 00:09:12.407 --rc geninfo_all_blocks=1 00:09:12.407 --rc geninfo_unexecuted_blocks=1 00:09:12.407 00:09:12.407 ' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.407 --rc genhtml_branch_coverage=1 00:09:12.407 --rc genhtml_function_coverage=1 00:09:12.407 --rc genhtml_legend=1 00:09:12.407 --rc geninfo_all_blocks=1 00:09:12.407 --rc geninfo_unexecuted_blocks=1 00:09:12.407 00:09:12.407 ' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.407 --rc genhtml_branch_coverage=1 00:09:12.407 --rc genhtml_function_coverage=1 00:09:12.407 --rc genhtml_legend=1 00:09:12.407 --rc geninfo_all_blocks=1 00:09:12.407 --rc geninfo_unexecuted_blocks=1 00:09:12.407 00:09:12.407 ' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.407 --rc genhtml_branch_coverage=1 00:09:12.407 --rc genhtml_function_coverage=1 00:09:12.407 --rc genhtml_legend=1 00:09:12.407 --rc geninfo_all_blocks=1 00:09:12.407 --rc geninfo_unexecuted_blocks=1 00:09:12.407 00:09:12.407 ' 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.407 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.408 
13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.408 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:12.667 Cannot find device "nvmf_init_br" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:12.667 Cannot find device "nvmf_init_br2" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:12.667 Cannot find device "nvmf_tgt_br" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.667 Cannot find device "nvmf_tgt_br2" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:12.667 Cannot find device "nvmf_init_br" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:12.667 Cannot find device "nvmf_init_br2" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:12.667 Cannot find device "nvmf_tgt_br" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:12.667 Cannot find device "nvmf_tgt_br2" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:12.667 Cannot find device "nvmf_br" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:12.667 Cannot find device "nvmf_init_if" 00:09:12.667 13:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:12.667 Cannot find device "nvmf_init_if2" 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.667 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.668 13:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:12.668 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.927 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:12.927 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:12.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:12.928 00:09:12.928 --- 10.0.0.3 ping statistics --- 00:09:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.928 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:12.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:09:12.928 00:09:12.928 --- 10.0.0.4 ping statistics --- 00:09:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.928 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:12.928 00:09:12.928 --- 10.0.0.1 ping statistics --- 00:09:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.928 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:12.928 00:09:12.928 --- 10.0.0.2 ping statistics --- 00:09:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.928 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67228 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67228 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67228 ']' 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
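nvmf_veth_init, traced above, builds the virtual topology these TCP-transport tests run on: a target network namespace, veth pairs whose host-side peers are enslaved to a single bridge, iptables rules accepting the NVMe/TCP listener port, and single pings in both directions to confirm the addresses are reachable. A reduced sketch with one initiator/target pair (the real common.sh creates two of each and also assigns 10.0.0.2 and 10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two host-side peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> host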
00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.928 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67257 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:13.187 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=19d64c49a49d6219dbc4183569c5c6dfde0b85b3789d601e 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.t3k 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 19d64c49a49d6219dbc4183569c5c6dfde0b85b3789d601e 0 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 19d64c49a49d6219dbc4183569c5c6dfde0b85b3789d601e 0 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=19d64c49a49d6219dbc4183569c5c6dfde0b85b3789d601e 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.447 13:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.t3k 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.t3k 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.t3k 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2ec6089cbadcc7477c9563f5de6a5af81164515e01410332c23457627d2b7088 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sOf 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2ec6089cbadcc7477c9563f5de6a5af81164515e01410332c23457627d2b7088 3 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2ec6089cbadcc7477c9563f5de6a5af81164515e01410332c23457627d2b7088 3 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2ec6089cbadcc7477c9563f5de6a5af81164515e01410332c23457627d2b7088 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sOf 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sOf 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.sOf 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:13.447 13:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=84b6fffbfe0c455c3bdf64bcc3ded6c9 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XRU 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 84b6fffbfe0c455c3bdf64bcc3ded6c9 1 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 84b6fffbfe0c455c3bdf64bcc3ded6c9 1 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=84b6fffbfe0c455c3bdf64bcc3ded6c9 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XRU 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XRU 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.XRU 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a9cbcbc6795c35eef8563bf602d86f83d9e2e8b1a4b0fb03 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k9h 00:09:13.447 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a9cbcbc6795c35eef8563bf602d86f83d9e2e8b1a4b0fb03 2 00:09:13.448 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a9cbcbc6795c35eef8563bf602d86f83d9e2e8b1a4b0fb03 2 00:09:13.448 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.448 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.448 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a9cbcbc6795c35eef8563bf602d86f83d9e2e8b1a4b0fb03 00:09:13.448 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:13.448 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k9h 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k9h 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.k9h 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6f403ffc60024c56900b7978bbf66fa1ac8eff8b241b26b7 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6t9 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6f403ffc60024c56900b7978bbf66fa1ac8eff8b241b26b7 2 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6f403ffc60024c56900b7978bbf66fa1ac8eff8b241b26b7 2 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6f403ffc60024c56900b7978bbf66fa1ac8eff8b241b26b7 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6t9 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6t9 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.6t9 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.707 13:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:13.707 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da17122b83adbad72c11fc5c328fb425 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lgD 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da17122b83adbad72c11fc5c328fb425 1 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da17122b83adbad72c11fc5c328fb425 1 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da17122b83adbad72c11fc5c328fb425 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lgD 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lgD 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.lgD 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aab9c03a97f806c96c3251865db19b6f63a559ef3478ea3cf0a29bebbad5fcae 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZOv 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
aab9c03a97f806c96c3251865db19b6f63a559ef3478ea3cf0a29bebbad5fcae 3 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aab9c03a97f806c96c3251865db19b6f63a559ef3478ea3cf0a29bebbad5fcae 3 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aab9c03a97f806c96c3251865db19b6f63a559ef3478ea3cf0a29bebbad5fcae 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZOv 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZOv 00:09:13.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ZOv 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67228 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67228 ']' 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.708 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67257 /var/tmp/host.sock 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67257 ']' 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
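[editor's note] For reference, each gen_dhchap_key call traced above reduces to: read len/2 random bytes as hex with xxd, treat that hex text itself as the secret, wrap it in the DHHC-1 representation (base64 of the ASCII key followed by a 4-byte CRC-32 trailer, which matches the payload lengths of the secrets printed later in this log), and store it mode 0600 in a mktemp file. A minimal stand-alone sketch of that flow, assuming the CRC-32 is appended little-endian; nvmf/common.sh in the checked-out tree is the authoritative version:

  # sketch: build one 32-hex-char DH-HMAC-CHAP secret for hash id 01 (sha256)
  key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes -> 32 hex chars
  file=$(mktemp -t spdk.key-sha256.XXX)
  python3 - "$key" > "$file" <<'PY'
import sys, base64, zlib
key = sys.argv[1].encode()                      # the ASCII hex text is the secret
crc = zlib.crc32(key).to_bytes(4, "little")     # assumption: little-endian CRC trailer
print("DHHC-1:01:" + base64.b64encode(key + crc).decode() + ":")
PY
  chmod 0600 "$file"

The same recipe with len 48/64 and hash ids 02/03 yields the sha384/sha512 secrets seen above; the null-digest key uses hash id 00, as in the DHHC-1:00: secret passed to nvme connect further down.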
00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.275 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.t3k 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.t3k 00:09:14.534 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.t3k 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.sOf ]] 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sOf 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sOf 00:09:14.793 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sOf 00:09:15.051 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:15.051 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XRU 00:09:15.052 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.052 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.052 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.052 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.XRU 00:09:15.052 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.XRU 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.k9h ]] 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k9h 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k9h 00:09:15.310 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k9h 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6t9 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6t9 00:09:15.569 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6t9 00:09:15.827 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.lgD ]] 00:09:15.827 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lgD 00:09:15.827 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.827 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.828 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.828 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lgD 00:09:15.828 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lgD 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZOv 00:09:16.086 13:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZOv 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZOv 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:16.086 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.345 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.346 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.914 00:09:16.914 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:16.914 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:16.914 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:16.914 { 00:09:16.914 "cntlid": 1, 00:09:16.914 "qid": 0, 00:09:16.914 "state": "enabled", 00:09:16.914 "thread": "nvmf_tgt_poll_group_000", 00:09:16.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:16.914 "listen_address": { 00:09:16.914 "trtype": "TCP", 00:09:16.914 "adrfam": "IPv4", 00:09:16.914 "traddr": "10.0.0.3", 00:09:16.914 "trsvcid": "4420" 00:09:16.914 }, 00:09:16.914 "peer_address": { 00:09:16.914 "trtype": "TCP", 00:09:16.914 "adrfam": "IPv4", 00:09:16.914 "traddr": "10.0.0.1", 00:09:16.914 "trsvcid": "33592" 00:09:16.914 }, 00:09:16.914 "auth": { 00:09:16.914 "state": "completed", 00:09:16.914 "digest": "sha256", 00:09:16.914 "dhgroup": "null" 00:09:16.914 } 00:09:16.914 } 00:09:16.914 ]' 00:09:16.914 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:17.173 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:17.432 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:17.432 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:21.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:21.656 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.657 13:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:21.657 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:21.657 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:21.916 { 00:09:21.916 "cntlid": 3, 00:09:21.916 "qid": 0, 00:09:21.916 "state": "enabled", 00:09:21.916 "thread": "nvmf_tgt_poll_group_000", 00:09:21.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:21.916 "listen_address": { 00:09:21.916 "trtype": "TCP", 00:09:21.916 "adrfam": "IPv4", 00:09:21.916 "traddr": "10.0.0.3", 00:09:21.916 "trsvcid": "4420" 00:09:21.916 }, 00:09:21.916 "peer_address": { 00:09:21.916 "trtype": "TCP", 00:09:21.916 "adrfam": "IPv4", 00:09:21.916 "traddr": "10.0.0.1", 00:09:21.916 "trsvcid": "33626" 00:09:21.916 }, 00:09:21.916 "auth": { 00:09:21.916 "state": "completed", 00:09:21.916 "digest": "sha256", 00:09:21.916 "dhgroup": "null" 00:09:21.916 } 00:09:21.916 } 00:09:21.916 ]' 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:21.916 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:22.175 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:22.175 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:22.175 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:22.175 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:22.175 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:22.434 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret 
DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:22.434 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:23.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:23.002 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:23.260 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:23.519 00:09:23.519 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:23.519 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:23.519 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:23.777 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:23.777 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:23.777 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.778 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:24.037 { 00:09:24.037 "cntlid": 5, 00:09:24.037 "qid": 0, 00:09:24.037 "state": "enabled", 00:09:24.037 "thread": "nvmf_tgt_poll_group_000", 00:09:24.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:24.037 "listen_address": { 00:09:24.037 "trtype": "TCP", 00:09:24.037 "adrfam": "IPv4", 00:09:24.037 "traddr": "10.0.0.3", 00:09:24.037 "trsvcid": "4420" 00:09:24.037 }, 00:09:24.037 "peer_address": { 00:09:24.037 "trtype": "TCP", 00:09:24.037 "adrfam": "IPv4", 00:09:24.037 "traddr": "10.0.0.1", 00:09:24.037 "trsvcid": "33660" 00:09:24.037 }, 00:09:24.037 "auth": { 00:09:24.037 "state": "completed", 00:09:24.037 "digest": "sha256", 00:09:24.037 "dhgroup": "null" 00:09:24.037 } 00:09:24.037 } 00:09:24.037 ]' 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:24.037 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:24.296 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:24.296 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:24.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:24.863 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:25.122 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:25.381 00:09:25.381 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:25.381 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:25.381 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:25.640 { 00:09:25.640 "cntlid": 7, 00:09:25.640 "qid": 0, 00:09:25.640 "state": "enabled", 00:09:25.640 "thread": "nvmf_tgt_poll_group_000", 00:09:25.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:25.640 "listen_address": { 00:09:25.640 "trtype": "TCP", 00:09:25.640 "adrfam": "IPv4", 00:09:25.640 "traddr": "10.0.0.3", 00:09:25.640 "trsvcid": "4420" 00:09:25.640 }, 00:09:25.640 "peer_address": { 00:09:25.640 "trtype": "TCP", 00:09:25.640 "adrfam": "IPv4", 00:09:25.640 "traddr": "10.0.0.1", 00:09:25.640 "trsvcid": "56578" 00:09:25.640 }, 00:09:25.640 "auth": { 00:09:25.640 "state": "completed", 00:09:25.640 "digest": "sha256", 00:09:25.640 "dhgroup": "null" 00:09:25.640 } 00:09:25.640 } 00:09:25.640 ]' 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:25.640 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:25.898 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:25.898 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:25.898 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:25.898 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:25.898 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:25.898 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:26.157 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:26.157 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:26.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:26.725 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:26.984 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:26.984 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:26.984 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:26.984 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:26.984 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:26.984 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:26.984 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:27.243 00:09:27.243 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:27.243 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:27.243 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:27.501 { 00:09:27.501 "cntlid": 9, 00:09:27.501 "qid": 0, 00:09:27.501 "state": "enabled", 00:09:27.501 "thread": "nvmf_tgt_poll_group_000", 00:09:27.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:27.501 "listen_address": { 00:09:27.501 "trtype": "TCP", 00:09:27.501 "adrfam": "IPv4", 00:09:27.501 "traddr": "10.0.0.3", 00:09:27.501 "trsvcid": "4420" 00:09:27.501 }, 00:09:27.501 "peer_address": { 00:09:27.501 "trtype": "TCP", 00:09:27.501 "adrfam": "IPv4", 00:09:27.501 "traddr": "10.0.0.1", 00:09:27.501 "trsvcid": "56598" 00:09:27.501 }, 00:09:27.501 "auth": { 00:09:27.501 "state": "completed", 00:09:27.501 "digest": "sha256", 00:09:27.501 "dhgroup": "ffdhe2048" 00:09:27.501 } 00:09:27.501 } 00:09:27.501 ]' 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:27.501 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:27.502 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:27.502 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:27.502 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:27.760 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:27.760 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:27.760 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:28.019 
13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:28.019 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:28.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:28.587 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:28.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:29.104 00:09:29.104 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:29.104 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:29.104 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:29.363 { 00:09:29.363 "cntlid": 11, 00:09:29.363 "qid": 0, 00:09:29.363 "state": "enabled", 00:09:29.363 "thread": "nvmf_tgt_poll_group_000", 00:09:29.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:29.363 "listen_address": { 00:09:29.363 "trtype": "TCP", 00:09:29.363 "adrfam": "IPv4", 00:09:29.363 "traddr": "10.0.0.3", 00:09:29.363 "trsvcid": "4420" 00:09:29.363 }, 00:09:29.363 "peer_address": { 00:09:29.363 "trtype": "TCP", 00:09:29.363 "adrfam": "IPv4", 00:09:29.363 "traddr": "10.0.0.1", 00:09:29.363 "trsvcid": "56618" 00:09:29.363 }, 00:09:29.363 "auth": { 00:09:29.363 "state": "completed", 00:09:29.363 "digest": "sha256", 00:09:29.363 "dhgroup": "ffdhe2048" 00:09:29.363 } 00:09:29.363 } 00:09:29.363 ]' 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:29.363 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:29.622 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:29.622 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:29.622 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:29.622 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:29.622 
13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:29.880 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:29.880 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:30.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:30.447 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:30.706 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:30.965 00:09:30.965 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:30.965 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:30.965 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:31.224 { 00:09:31.224 "cntlid": 13, 00:09:31.224 "qid": 0, 00:09:31.224 "state": "enabled", 00:09:31.224 "thread": "nvmf_tgt_poll_group_000", 00:09:31.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:31.224 "listen_address": { 00:09:31.224 "trtype": "TCP", 00:09:31.224 "adrfam": "IPv4", 00:09:31.224 "traddr": "10.0.0.3", 00:09:31.224 "trsvcid": "4420" 00:09:31.224 }, 00:09:31.224 "peer_address": { 00:09:31.224 "trtype": "TCP", 00:09:31.224 "adrfam": "IPv4", 00:09:31.224 "traddr": "10.0.0.1", 00:09:31.224 "trsvcid": "56652" 00:09:31.224 }, 00:09:31.224 "auth": { 00:09:31.224 "state": "completed", 00:09:31.224 "digest": "sha256", 00:09:31.224 "dhgroup": "ffdhe2048" 00:09:31.224 } 00:09:31.224 } 00:09:31.224 ]' 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:31.224 13:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:31.224 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:31.791 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:31.791 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:32.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:32.359 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:32.618 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:32.876 00:09:32.876 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:32.876 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:32.876 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:33.135 { 00:09:33.135 "cntlid": 15, 00:09:33.135 "qid": 0, 00:09:33.135 "state": "enabled", 00:09:33.135 "thread": "nvmf_tgt_poll_group_000", 00:09:33.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:33.135 "listen_address": { 00:09:33.135 "trtype": "TCP", 00:09:33.135 "adrfam": "IPv4", 00:09:33.135 "traddr": "10.0.0.3", 00:09:33.135 "trsvcid": "4420" 00:09:33.135 }, 00:09:33.135 "peer_address": { 00:09:33.135 "trtype": "TCP", 00:09:33.135 "adrfam": "IPv4", 00:09:33.135 "traddr": "10.0.0.1", 00:09:33.135 "trsvcid": "56684" 00:09:33.135 }, 00:09:33.135 "auth": { 00:09:33.135 "state": "completed", 00:09:33.135 "digest": "sha256", 00:09:33.135 "dhgroup": "ffdhe2048" 00:09:33.135 } 00:09:33.135 } 00:09:33.135 ]' 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:33.135 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.396 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.396 
13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.396 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:33.683 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:33.683 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:34.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:34.250 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:34.509 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:34.767 00:09:34.767 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:34.767 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:34.767 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:35.335 { 00:09:35.335 "cntlid": 17, 00:09:35.335 "qid": 0, 00:09:35.335 "state": "enabled", 00:09:35.335 "thread": "nvmf_tgt_poll_group_000", 00:09:35.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:35.335 "listen_address": { 00:09:35.335 "trtype": "TCP", 00:09:35.335 "adrfam": "IPv4", 00:09:35.335 "traddr": "10.0.0.3", 00:09:35.335 "trsvcid": "4420" 00:09:35.335 }, 00:09:35.335 "peer_address": { 00:09:35.335 "trtype": "TCP", 00:09:35.335 "adrfam": "IPv4", 00:09:35.335 "traddr": "10.0.0.1", 00:09:35.335 "trsvcid": "33556" 00:09:35.335 }, 00:09:35.335 "auth": { 00:09:35.335 "state": "completed", 00:09:35.335 "digest": "sha256", 00:09:35.335 "dhgroup": "ffdhe3072" 00:09:35.335 } 00:09:35.335 } 00:09:35.335 ]' 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:35.335 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:35.335 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:35.594 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:35.594 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:36.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:36.161 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:36.431 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:36.690 00:09:36.690 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:36.690 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:36.690 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:37.258 { 00:09:37.258 "cntlid": 19, 00:09:37.258 "qid": 0, 00:09:37.258 "state": "enabled", 00:09:37.258 "thread": "nvmf_tgt_poll_group_000", 00:09:37.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:37.258 "listen_address": { 00:09:37.258 "trtype": "TCP", 00:09:37.258 "adrfam": "IPv4", 00:09:37.258 "traddr": "10.0.0.3", 00:09:37.258 "trsvcid": "4420" 00:09:37.258 }, 00:09:37.258 "peer_address": { 00:09:37.258 "trtype": "TCP", 00:09:37.258 "adrfam": "IPv4", 00:09:37.258 "traddr": "10.0.0.1", 00:09:37.258 "trsvcid": "33580" 00:09:37.258 }, 00:09:37.258 "auth": { 00:09:37.258 "state": "completed", 00:09:37.258 "digest": "sha256", 00:09:37.258 "dhgroup": "ffdhe3072" 00:09:37.258 } 00:09:37.258 } 00:09:37.258 ]' 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:37.258 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:37.516 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:37.516 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:38.084 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:38.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:38.084 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:38.084 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.084 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.343 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.343 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:38.343 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:38.343 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.602 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.861 00:09:38.861 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:38.861 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:38.861 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.120 { 00:09:39.120 "cntlid": 21, 00:09:39.120 "qid": 0, 00:09:39.120 "state": "enabled", 00:09:39.120 "thread": "nvmf_tgt_poll_group_000", 00:09:39.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:39.120 "listen_address": { 00:09:39.120 "trtype": "TCP", 00:09:39.120 "adrfam": "IPv4", 00:09:39.120 "traddr": "10.0.0.3", 00:09:39.120 "trsvcid": "4420" 00:09:39.120 }, 00:09:39.120 "peer_address": { 00:09:39.120 "trtype": "TCP", 00:09:39.120 "adrfam": "IPv4", 00:09:39.120 "traddr": "10.0.0.1", 00:09:39.120 "trsvcid": "33596" 00:09:39.120 }, 00:09:39.120 "auth": { 00:09:39.120 "state": "completed", 00:09:39.120 "digest": "sha256", 00:09:39.120 "dhgroup": "ffdhe3072" 00:09:39.120 } 00:09:39.120 } 00:09:39.120 ]' 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.120 13:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:39.120 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.379 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.379 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.379 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:39.379 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:39.379 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:39.945 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:40.511 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:40.511 00:09:40.770 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.770 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.770 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:41.028 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.028 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.028 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.028 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.028 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.028 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.028 { 00:09:41.028 "cntlid": 23, 00:09:41.028 "qid": 0, 00:09:41.028 "state": "enabled", 00:09:41.028 "thread": "nvmf_tgt_poll_group_000", 00:09:41.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:41.028 "listen_address": { 00:09:41.028 "trtype": "TCP", 00:09:41.028 "adrfam": "IPv4", 00:09:41.028 "traddr": "10.0.0.3", 00:09:41.028 "trsvcid": "4420" 00:09:41.029 }, 00:09:41.029 "peer_address": { 00:09:41.029 "trtype": "TCP", 00:09:41.029 "adrfam": "IPv4", 00:09:41.029 "traddr": "10.0.0.1", 00:09:41.029 "trsvcid": "33628" 00:09:41.029 }, 00:09:41.029 "auth": { 00:09:41.029 "state": "completed", 00:09:41.029 "digest": "sha256", 00:09:41.029 "dhgroup": "ffdhe3072" 00:09:41.029 } 00:09:41.029 } 00:09:41.029 ]' 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.029 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.288 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:41.288 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.225 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.793 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:42.793 { 00:09:42.793 "cntlid": 25, 00:09:42.793 "qid": 0, 00:09:42.793 "state": "enabled", 00:09:42.793 "thread": "nvmf_tgt_poll_group_000", 00:09:42.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:42.793 "listen_address": { 00:09:42.793 "trtype": "TCP", 00:09:42.793 "adrfam": "IPv4", 00:09:42.793 "traddr": "10.0.0.3", 00:09:42.793 "trsvcid": "4420" 00:09:42.793 }, 00:09:42.793 "peer_address": { 00:09:42.793 "trtype": "TCP", 00:09:42.793 "adrfam": "IPv4", 00:09:42.793 "traddr": "10.0.0.1", 00:09:42.793 "trsvcid": "33644" 00:09:42.793 }, 00:09:42.793 "auth": { 00:09:42.793 "state": "completed", 00:09:42.793 "digest": "sha256", 00:09:42.793 "dhgroup": "ffdhe4096" 00:09:42.793 } 00:09:42.793 } 00:09:42.793 ]' 00:09:42.793 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.053 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.311 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:43.311 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:43.877 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.877 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:43.877 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.877 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.877 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.877 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:43.878 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:43.878 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:44.136 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:44.136 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.136 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.137 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.705 00:09:44.705 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.705 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.705 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.997 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.997 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.998 { 00:09:44.998 "cntlid": 27, 00:09:44.998 "qid": 0, 00:09:44.998 "state": "enabled", 00:09:44.998 "thread": "nvmf_tgt_poll_group_000", 00:09:44.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:44.998 "listen_address": { 00:09:44.998 "trtype": "TCP", 00:09:44.998 "adrfam": "IPv4", 00:09:44.998 "traddr": "10.0.0.3", 00:09:44.998 "trsvcid": "4420" 00:09:44.998 }, 00:09:44.998 "peer_address": { 00:09:44.998 "trtype": "TCP", 00:09:44.998 "adrfam": "IPv4", 00:09:44.998 "traddr": "10.0.0.1", 00:09:44.998 "trsvcid": "33668" 00:09:44.998 }, 00:09:44.998 "auth": { 00:09:44.998 "state": "completed", 
00:09:44.998 "digest": "sha256", 00:09:44.998 "dhgroup": "ffdhe4096" 00:09:44.998 } 00:09:44.998 } 00:09:44.998 ]' 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.998 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.256 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:45.256 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:45.824 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:46.083 13:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.083 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:46.651 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.651 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.910 { 00:09:46.910 "cntlid": 29, 00:09:46.910 "qid": 0, 00:09:46.910 "state": "enabled", 00:09:46.910 "thread": "nvmf_tgt_poll_group_000", 00:09:46.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:46.910 "listen_address": { 00:09:46.910 "trtype": "TCP", 00:09:46.910 "adrfam": "IPv4", 00:09:46.910 "traddr": "10.0.0.3", 00:09:46.910 "trsvcid": "4420" 00:09:46.910 }, 00:09:46.910 "peer_address": { 00:09:46.910 "trtype": "TCP", 00:09:46.910 "adrfam": 
"IPv4", 00:09:46.910 "traddr": "10.0.0.1", 00:09:46.910 "trsvcid": "35308" 00:09:46.910 }, 00:09:46.910 "auth": { 00:09:46.910 "state": "completed", 00:09:46.910 "digest": "sha256", 00:09:46.910 "dhgroup": "ffdhe4096" 00:09:46.910 } 00:09:46.910 } 00:09:46.910 ]' 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:46.910 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.910 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.910 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.910 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:47.169 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:47.169 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:48.106 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:09:48.106 13:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:48.106 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:48.674 00:09:48.674 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.674 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.674 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:48.933 { 00:09:48.933 "cntlid": 31, 00:09:48.933 "qid": 0, 00:09:48.933 "state": "enabled", 00:09:48.933 "thread": "nvmf_tgt_poll_group_000", 00:09:48.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:48.933 "listen_address": { 00:09:48.933 "trtype": "TCP", 00:09:48.933 "adrfam": "IPv4", 00:09:48.933 "traddr": "10.0.0.3", 00:09:48.933 "trsvcid": "4420" 00:09:48.933 }, 00:09:48.933 "peer_address": { 00:09:48.933 "trtype": "TCP", 
00:09:48.933 "adrfam": "IPv4", 00:09:48.933 "traddr": "10.0.0.1", 00:09:48.933 "trsvcid": "35334" 00:09:48.933 }, 00:09:48.933 "auth": { 00:09:48.933 "state": "completed", 00:09:48.933 "digest": "sha256", 00:09:48.933 "dhgroup": "ffdhe4096" 00:09:48.933 } 00:09:48.933 } 00:09:48.933 ]' 00:09:48.933 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.933 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.191 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:49.191 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:09:50.136 
13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.136 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.397 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.397 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.397 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.397 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.655 00:09:50.655 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.655 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.655 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.914 { 00:09:50.914 "cntlid": 33, 00:09:50.914 "qid": 0, 00:09:50.914 "state": "enabled", 00:09:50.914 "thread": "nvmf_tgt_poll_group_000", 00:09:50.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:50.914 "listen_address": { 00:09:50.914 "trtype": "TCP", 00:09:50.914 "adrfam": "IPv4", 00:09:50.914 "traddr": 
"10.0.0.3", 00:09:50.914 "trsvcid": "4420" 00:09:50.914 }, 00:09:50.914 "peer_address": { 00:09:50.914 "trtype": "TCP", 00:09:50.914 "adrfam": "IPv4", 00:09:50.914 "traddr": "10.0.0.1", 00:09:50.914 "trsvcid": "35358" 00:09:50.914 }, 00:09:50.914 "auth": { 00:09:50.914 "state": "completed", 00:09:50.914 "digest": "sha256", 00:09:50.914 "dhgroup": "ffdhe6144" 00:09:50.914 } 00:09:50.914 } 00:09:50.914 ]' 00:09:50.914 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.173 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.431 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:51.432 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:52.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:52.000 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.259 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.827 00:09:52.827 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.827 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.827 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:53.086 { 00:09:53.086 "cntlid": 35, 00:09:53.086 "qid": 0, 00:09:53.086 "state": "enabled", 00:09:53.086 "thread": "nvmf_tgt_poll_group_000", 
00:09:53.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:53.086 "listen_address": { 00:09:53.086 "trtype": "TCP", 00:09:53.086 "adrfam": "IPv4", 00:09:53.086 "traddr": "10.0.0.3", 00:09:53.086 "trsvcid": "4420" 00:09:53.086 }, 00:09:53.086 "peer_address": { 00:09:53.086 "trtype": "TCP", 00:09:53.086 "adrfam": "IPv4", 00:09:53.086 "traddr": "10.0.0.1", 00:09:53.086 "trsvcid": "35388" 00:09:53.086 }, 00:09:53.086 "auth": { 00:09:53.086 "state": "completed", 00:09:53.086 "digest": "sha256", 00:09:53.086 "dhgroup": "ffdhe6144" 00:09:53.086 } 00:09:53.086 } 00:09:53.086 ]' 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:53.086 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.654 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:53.654 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:54.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:54.222 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:54.222 13:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.481 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.049 00:09:55.049 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.049 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.049 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.308 { 
00:09:55.308 "cntlid": 37, 00:09:55.308 "qid": 0, 00:09:55.308 "state": "enabled", 00:09:55.308 "thread": "nvmf_tgt_poll_group_000", 00:09:55.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:55.308 "listen_address": { 00:09:55.308 "trtype": "TCP", 00:09:55.308 "adrfam": "IPv4", 00:09:55.308 "traddr": "10.0.0.3", 00:09:55.308 "trsvcid": "4420" 00:09:55.308 }, 00:09:55.308 "peer_address": { 00:09:55.308 "trtype": "TCP", 00:09:55.308 "adrfam": "IPv4", 00:09:55.308 "traddr": "10.0.0.1", 00:09:55.308 "trsvcid": "56104" 00:09:55.308 }, 00:09:55.308 "auth": { 00:09:55.308 "state": "completed", 00:09:55.308 "digest": "sha256", 00:09:55.308 "dhgroup": "ffdhe6144" 00:09:55.308 } 00:09:55.308 } 00:09:55.308 ]' 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.308 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:55.567 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:55.567 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:56.505 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:57.072 00:09:57.072 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.072 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.072 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:09:57.332 { 00:09:57.332 "cntlid": 39, 00:09:57.332 "qid": 0, 00:09:57.332 "state": "enabled", 00:09:57.332 "thread": "nvmf_tgt_poll_group_000", 00:09:57.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:57.332 "listen_address": { 00:09:57.332 "trtype": "TCP", 00:09:57.332 "adrfam": "IPv4", 00:09:57.332 "traddr": "10.0.0.3", 00:09:57.332 "trsvcid": "4420" 00:09:57.332 }, 00:09:57.332 "peer_address": { 00:09:57.332 "trtype": "TCP", 00:09:57.332 "adrfam": "IPv4", 00:09:57.332 "traddr": "10.0.0.1", 00:09:57.332 "trsvcid": "56118" 00:09:57.332 }, 00:09:57.332 "auth": { 00:09:57.332 "state": "completed", 00:09:57.332 "digest": "sha256", 00:09:57.332 "dhgroup": "ffdhe6144" 00:09:57.332 } 00:09:57.332 } 00:09:57.332 ]' 00:09:57.332 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.592 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.851 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:57.851 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:09:58.419 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.419 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:09:58.419 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.419 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.677 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.677 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:58.677 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.677 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:58.677 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:58.936 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.503 00:09:59.503 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.503 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.503 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.762 { 00:09:59.762 "cntlid": 41, 00:09:59.762 "qid": 0, 00:09:59.762 "state": "enabled", 00:09:59.762 "thread": "nvmf_tgt_poll_group_000", 00:09:59.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:09:59.762 "listen_address": { 00:09:59.762 "trtype": "TCP", 00:09:59.762 "adrfam": "IPv4", 00:09:59.762 "traddr": "10.0.0.3", 00:09:59.762 "trsvcid": "4420" 00:09:59.762 }, 00:09:59.762 "peer_address": { 00:09:59.762 "trtype": "TCP", 00:09:59.762 "adrfam": "IPv4", 00:09:59.762 "traddr": "10.0.0.1", 00:09:59.762 "trsvcid": "56142" 00:09:59.762 }, 00:09:59.762 "auth": { 00:09:59.762 "state": "completed", 00:09:59.762 "digest": "sha256", 00:09:59.762 "dhgroup": "ffdhe8192" 00:09:59.762 } 00:09:59.762 } 00:09:59.762 ]' 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.762 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.021 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:00.021 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:00.587 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
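Each round in this excerpt drives the same target/host RPC sequence. A minimal sketch of one round, assembled from the commands logged above (the NQNs, the 10.0.0.3:4420 listener and the /var/tmp/host.sock host socket are copied from the log; the key1/ckey1 names are assumed to refer to keyring entries loaded earlier in auth.sh, and the target-side RPC socket hidden behind rpc_cmd is assumed to be the default one):

  # Host side: restrict negotiation to this round's digest/dhgroup pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Target side: let the host NQN authenticate with key1 (ckey1 enables bidirectional auth)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: attach a controller over TCP, presenting the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1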
00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:00.588 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.155 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.722 00:10:01.723 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.723 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.723 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.981 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.981 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.981 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.981 13:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.981 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.981 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.981 { 00:10:01.981 "cntlid": 43, 00:10:01.981 "qid": 0, 00:10:01.981 "state": "enabled", 00:10:01.981 "thread": "nvmf_tgt_poll_group_000", 00:10:01.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:01.981 "listen_address": { 00:10:01.981 "trtype": "TCP", 00:10:01.981 "adrfam": "IPv4", 00:10:01.981 "traddr": "10.0.0.3", 00:10:01.981 "trsvcid": "4420" 00:10:01.981 }, 00:10:01.981 "peer_address": { 00:10:01.981 "trtype": "TCP", 00:10:01.981 "adrfam": "IPv4", 00:10:01.981 "traddr": "10.0.0.1", 00:10:01.981 "trsvcid": "56160" 00:10:01.981 }, 00:10:01.981 "auth": { 00:10:01.981 "state": "completed", 00:10:01.981 "digest": "sha256", 00:10:01.981 "dhgroup": "ffdhe8192" 00:10:01.981 } 00:10:01.981 } 00:10:01.981 ]' 00:10:01.981 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.981 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.240 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:02.240 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
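The @73-@77 checks repeated after every attach verify what was actually negotiated. The same verification written out as standalone shell, a sketch only (the jq filters and expected values are copied from the ffdhe8192 rounds above):

  # The host-side controller must exist under the name passed to -b
  [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # The target reports one qpair whose auth block reflects the negotiated parameters
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]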
00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.178 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.744 00:10:03.744 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.744 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.744 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.003 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.003 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.003 13:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.003 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.003 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.003 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.003 { 00:10:04.003 "cntlid": 45, 00:10:04.003 "qid": 0, 00:10:04.003 "state": "enabled", 00:10:04.003 "thread": "nvmf_tgt_poll_group_000", 00:10:04.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:04.003 "listen_address": { 00:10:04.003 "trtype": "TCP", 00:10:04.003 "adrfam": "IPv4", 00:10:04.003 "traddr": "10.0.0.3", 00:10:04.003 "trsvcid": "4420" 00:10:04.003 }, 00:10:04.003 "peer_address": { 00:10:04.003 "trtype": "TCP", 00:10:04.003 "adrfam": "IPv4", 00:10:04.003 "traddr": "10.0.0.1", 00:10:04.003 "trsvcid": "56180" 00:10:04.003 }, 00:10:04.003 "auth": { 00:10:04.003 "state": "completed", 00:10:04.003 "digest": "sha256", 00:10:04.003 "dhgroup": "ffdhe8192" 00:10:04.003 } 00:10:04.003 } 00:10:04.003 ]' 00:10:04.003 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.262 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.262 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.262 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:04.262 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.263 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.263 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.263 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.521 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:04.521 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
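After the host-RPC path succeeds, each round repeats the handshake from the kernel initiator with nvme-cli (target/auth.sh@36 and @82 in the log). A sketch using the flags shown above; the two angle-bracket placeholders stand for the DHHC-1 host and controller secrets printed in the corresponding nvme_connect line and are not literal values:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba \
      --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 \
      --dhchap-secret '<DHHC-1 host secret>' --dhchap-ctrl-secret '<DHHC-1 controller secret>'
  # Tear the session down again once the connect (and therefore the authentication) has succeeded
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0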
00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:05.089 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:05.348 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:05.916 00:10:05.916 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.916 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.916 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.174 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.174 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.174 
13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.174 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.174 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.174 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.174 { 00:10:06.174 "cntlid": 47, 00:10:06.174 "qid": 0, 00:10:06.174 "state": "enabled", 00:10:06.174 "thread": "nvmf_tgt_poll_group_000", 00:10:06.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:06.174 "listen_address": { 00:10:06.174 "trtype": "TCP", 00:10:06.174 "adrfam": "IPv4", 00:10:06.174 "traddr": "10.0.0.3", 00:10:06.174 "trsvcid": "4420" 00:10:06.174 }, 00:10:06.174 "peer_address": { 00:10:06.174 "trtype": "TCP", 00:10:06.174 "adrfam": "IPv4", 00:10:06.174 "traddr": "10.0.0.1", 00:10:06.174 "trsvcid": "35656" 00:10:06.174 }, 00:10:06.174 "auth": { 00:10:06.174 "state": "completed", 00:10:06.174 "digest": "sha256", 00:10:06.174 "dhgroup": "ffdhe8192" 00:10:06.174 } 00:10:06.174 } 00:10:06.174 ]' 00:10:06.174 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.433 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.692 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:06.692 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
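The trace above has just completed one full connect_authenticate iteration (sha256 digest, ffdhe8192 DH group, key3) and is removing the host before the next digest/dhgroup pass. A minimal sketch of the RPC sequence each such iteration exercises, using the target address, subsystem NQN and host NQN visible in the trace; key3 is a keyring entry registered earlier in auth.sh (not shown here), and the target-side calls assume the default RPC socket, since the trace hides rpc_cmd's expansion:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: allow the host to authenticate with DH-HMAC-CHAP key3
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

  # host side (SPDK bdev initiator on /var/tmp/host.sock): pin the digest/dhgroup, then attach with the same key
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

  # verify the negotiated auth parameters on the target, then tear down for the next combination
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
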
00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.630 13:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.889 00:10:07.889 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.889 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.889 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.147 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.147 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.147 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.147 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.147 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.147 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.147 { 00:10:08.147 "cntlid": 49, 00:10:08.147 "qid": 0, 00:10:08.147 "state": "enabled", 00:10:08.147 "thread": "nvmf_tgt_poll_group_000", 00:10:08.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:08.147 "listen_address": { 00:10:08.147 "trtype": "TCP", 00:10:08.147 "adrfam": "IPv4", 00:10:08.147 "traddr": "10.0.0.3", 00:10:08.147 "trsvcid": "4420" 00:10:08.147 }, 00:10:08.147 "peer_address": { 00:10:08.147 "trtype": "TCP", 00:10:08.147 "adrfam": "IPv4", 00:10:08.147 "traddr": "10.0.0.1", 00:10:08.147 "trsvcid": "35690" 00:10:08.147 }, 00:10:08.147 "auth": { 00:10:08.147 "state": "completed", 00:10:08.147 "digest": "sha384", 00:10:08.148 "dhgroup": "null" 00:10:08.148 } 00:10:08.148 } 00:10:08.148 ]' 00:10:08.148 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.406 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.665 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:08.665 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.233 13:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:09.233 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.492 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.751 00:10:09.751 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.751 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.751 13:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.336 { 00:10:10.336 "cntlid": 51, 00:10:10.336 "qid": 0, 00:10:10.336 "state": "enabled", 00:10:10.336 "thread": "nvmf_tgt_poll_group_000", 00:10:10.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:10.336 "listen_address": { 00:10:10.336 "trtype": "TCP", 00:10:10.336 "adrfam": "IPv4", 00:10:10.336 "traddr": "10.0.0.3", 00:10:10.336 "trsvcid": "4420" 00:10:10.336 }, 00:10:10.336 "peer_address": { 00:10:10.336 "trtype": "TCP", 00:10:10.336 "adrfam": "IPv4", 00:10:10.336 "traddr": "10.0.0.1", 00:10:10.336 "trsvcid": "35720" 00:10:10.336 }, 00:10:10.336 "auth": { 00:10:10.336 "state": "completed", 00:10:10.336 "digest": "sha384", 00:10:10.336 "dhgroup": "null" 00:10:10.336 } 00:10:10.336 } 00:10:10.336 ]' 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.336 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.595 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:10.595 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.162 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:11.162 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.421 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.680 00:10:11.680 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.680 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:11.680 13:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.939 { 00:10:11.939 "cntlid": 53, 00:10:11.939 "qid": 0, 00:10:11.939 "state": "enabled", 00:10:11.939 "thread": "nvmf_tgt_poll_group_000", 00:10:11.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:11.939 "listen_address": { 00:10:11.939 "trtype": "TCP", 00:10:11.939 "adrfam": "IPv4", 00:10:11.939 "traddr": "10.0.0.3", 00:10:11.939 "trsvcid": "4420" 00:10:11.939 }, 00:10:11.939 "peer_address": { 00:10:11.939 "trtype": "TCP", 00:10:11.939 "adrfam": "IPv4", 00:10:11.939 "traddr": "10.0.0.1", 00:10:11.939 "trsvcid": "35752" 00:10:11.939 }, 00:10:11.939 "auth": { 00:10:11.939 "state": "completed", 00:10:11.939 "digest": "sha384", 00:10:11.939 "dhgroup": "null" 00:10:11.939 } 00:10:11.939 } 00:10:11.939 ]' 00:10:11.939 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.198 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.456 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:12.456 13:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:13.023 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:13.281 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:13.540 00:10:13.540 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.540 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:10:13.540 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.799 { 00:10:13.799 "cntlid": 55, 00:10:13.799 "qid": 0, 00:10:13.799 "state": "enabled", 00:10:13.799 "thread": "nvmf_tgt_poll_group_000", 00:10:13.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:13.799 "listen_address": { 00:10:13.799 "trtype": "TCP", 00:10:13.799 "adrfam": "IPv4", 00:10:13.799 "traddr": "10.0.0.3", 00:10:13.799 "trsvcid": "4420" 00:10:13.799 }, 00:10:13.799 "peer_address": { 00:10:13.799 "trtype": "TCP", 00:10:13.799 "adrfam": "IPv4", 00:10:13.799 "traddr": "10.0.0.1", 00:10:13.799 "trsvcid": "35792" 00:10:13.799 }, 00:10:13.799 "auth": { 00:10:13.799 "state": "completed", 00:10:13.799 "digest": "sha384", 00:10:13.799 "dhgroup": "null" 00:10:13.799 } 00:10:13.799 } 00:10:13.799 ]' 00:10:13.799 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.058 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.317 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:14.317 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
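Alongside the SPDK bdev initiator, each iteration also authenticates the kernel initiator through nvme-cli and tears it down again, as in the disconnect directly above (sha384 digest, null DH group, key3). A minimal sketch of that half of the loop, with literal values taken from the trace; the DHHC-1 secret below is a placeholder standing in for the base64 key material printed in the log, and --dhchap-ctrl-secret is added only for keys that also have a controller key (key3 does not):

  # kernel initiator: authenticate with the host secret for this key, then disconnect
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba \
      --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 \
      --dhchap-secret 'DHHC-1:03:<host key material>:'   # placeholder, not a real secret
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
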
00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:14.885 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.144 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.403 00:10:15.403 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.403 
13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.403 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.661 { 00:10:15.661 "cntlid": 57, 00:10:15.661 "qid": 0, 00:10:15.661 "state": "enabled", 00:10:15.661 "thread": "nvmf_tgt_poll_group_000", 00:10:15.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:15.661 "listen_address": { 00:10:15.661 "trtype": "TCP", 00:10:15.661 "adrfam": "IPv4", 00:10:15.661 "traddr": "10.0.0.3", 00:10:15.661 "trsvcid": "4420" 00:10:15.661 }, 00:10:15.661 "peer_address": { 00:10:15.661 "trtype": "TCP", 00:10:15.661 "adrfam": "IPv4", 00:10:15.661 "traddr": "10.0.0.1", 00:10:15.661 "trsvcid": "59024" 00:10:15.661 }, 00:10:15.661 "auth": { 00:10:15.661 "state": "completed", 00:10:15.661 "digest": "sha384", 00:10:15.661 "dhgroup": "ffdhe2048" 00:10:15.661 } 00:10:15.661 } 00:10:15.661 ]' 00:10:15.661 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.920 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:15.920 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.920 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:15.920 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.920 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.920 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.920 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.178 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:16.178 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: 
--dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:16.744 13:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.003 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.261 00:10:17.261 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.261 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.261 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.519 { 00:10:17.519 "cntlid": 59, 00:10:17.519 "qid": 0, 00:10:17.519 "state": "enabled", 00:10:17.519 "thread": "nvmf_tgt_poll_group_000", 00:10:17.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:17.519 "listen_address": { 00:10:17.519 "trtype": "TCP", 00:10:17.519 "adrfam": "IPv4", 00:10:17.519 "traddr": "10.0.0.3", 00:10:17.519 "trsvcid": "4420" 00:10:17.519 }, 00:10:17.519 "peer_address": { 00:10:17.519 "trtype": "TCP", 00:10:17.519 "adrfam": "IPv4", 00:10:17.519 "traddr": "10.0.0.1", 00:10:17.519 "trsvcid": "59044" 00:10:17.519 }, 00:10:17.519 "auth": { 00:10:17.519 "state": "completed", 00:10:17.519 "digest": "sha384", 00:10:17.519 "dhgroup": "ffdhe2048" 00:10:17.519 } 00:10:17.519 } 00:10:17.519 ]' 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:17.519 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.777 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:17.777 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.777 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.777 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.777 13:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.035 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:18.035 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:18.602 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:18.860 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:18.860 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.860 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:18.860 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:18.860 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.861 13:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.120 00:10:19.120 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.120 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.120 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.378 { 00:10:19.378 "cntlid": 61, 00:10:19.378 "qid": 0, 00:10:19.378 "state": "enabled", 00:10:19.378 "thread": "nvmf_tgt_poll_group_000", 00:10:19.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:19.378 "listen_address": { 00:10:19.378 "trtype": "TCP", 00:10:19.378 "adrfam": "IPv4", 00:10:19.378 "traddr": "10.0.0.3", 00:10:19.378 "trsvcid": "4420" 00:10:19.378 }, 00:10:19.378 "peer_address": { 00:10:19.378 "trtype": "TCP", 00:10:19.378 "adrfam": "IPv4", 00:10:19.378 "traddr": "10.0.0.1", 00:10:19.378 "trsvcid": "59072" 00:10:19.378 }, 00:10:19.378 "auth": { 00:10:19.378 "state": "completed", 00:10:19.378 "digest": "sha384", 00:10:19.378 "dhgroup": "ffdhe2048" 00:10:19.378 } 00:10:19.378 } 00:10:19.378 ]' 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.378 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:19.379 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.379 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:19.379 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.379 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.379 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.379 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.637 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:19.637 13:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.571 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.572 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:20.572 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:20.572 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:20.830 00:10:20.830 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.830 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.830 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.398 { 00:10:21.398 "cntlid": 63, 00:10:21.398 "qid": 0, 00:10:21.398 "state": "enabled", 00:10:21.398 "thread": "nvmf_tgt_poll_group_000", 00:10:21.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:21.398 "listen_address": { 00:10:21.398 "trtype": "TCP", 00:10:21.398 "adrfam": "IPv4", 00:10:21.398 "traddr": "10.0.0.3", 00:10:21.398 "trsvcid": "4420" 00:10:21.398 }, 00:10:21.398 "peer_address": { 00:10:21.398 "trtype": "TCP", 00:10:21.398 "adrfam": "IPv4", 00:10:21.398 "traddr": "10.0.0.1", 00:10:21.398 "trsvcid": "59118" 00:10:21.398 }, 00:10:21.398 "auth": { 00:10:21.398 "state": "completed", 00:10:21.398 "digest": "sha384", 00:10:21.398 "dhgroup": "ffdhe2048" 00:10:21.398 } 00:10:21.398 } 00:10:21.398 ]' 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.398 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.657 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:21.657 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:22.225 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:22.484 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.743 00:10:22.743 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.743 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.743 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.002 { 00:10:23.002 "cntlid": 65, 00:10:23.002 "qid": 0, 00:10:23.002 "state": "enabled", 00:10:23.002 "thread": "nvmf_tgt_poll_group_000", 00:10:23.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:23.002 "listen_address": { 00:10:23.002 "trtype": "TCP", 00:10:23.002 "adrfam": "IPv4", 00:10:23.002 "traddr": "10.0.0.3", 00:10:23.002 "trsvcid": "4420" 00:10:23.002 }, 00:10:23.002 "peer_address": { 00:10:23.002 "trtype": "TCP", 00:10:23.002 "adrfam": "IPv4", 00:10:23.002 "traddr": "10.0.0.1", 00:10:23.002 "trsvcid": "59150" 00:10:23.002 }, 00:10:23.002 "auth": { 00:10:23.002 "state": "completed", 00:10:23.002 "digest": "sha384", 00:10:23.002 "dhgroup": "ffdhe3072" 00:10:23.002 } 00:10:23.002 } 00:10:23.002 ]' 00:10:23.002 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.261 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.520 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:23.520 13:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:24.088 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.347 13:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.347 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.914 00:10:24.914 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.914 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.914 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.174 { 00:10:25.174 "cntlid": 67, 00:10:25.174 "qid": 0, 00:10:25.174 "state": "enabled", 00:10:25.174 "thread": "nvmf_tgt_poll_group_000", 00:10:25.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:25.174 "listen_address": { 00:10:25.174 "trtype": "TCP", 00:10:25.174 "adrfam": "IPv4", 00:10:25.174 "traddr": "10.0.0.3", 00:10:25.174 "trsvcid": "4420" 00:10:25.174 }, 00:10:25.174 "peer_address": { 00:10:25.174 "trtype": "TCP", 00:10:25.174 "adrfam": "IPv4", 00:10:25.174 "traddr": "10.0.0.1", 00:10:25.174 "trsvcid": "37062" 00:10:25.174 }, 00:10:25.174 "auth": { 00:10:25.174 "state": "completed", 00:10:25.174 "digest": "sha384", 00:10:25.174 "dhgroup": "ffdhe3072" 00:10:25.174 } 00:10:25.174 } 00:10:25.174 ]' 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.174 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.433 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:25.433 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:26.000 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.000 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:26.000 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.000 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.259 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.259 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.259 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:26.259 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.518 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.777 00:10:26.777 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.777 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.777 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.036 { 00:10:27.036 "cntlid": 69, 00:10:27.036 "qid": 0, 00:10:27.036 "state": "enabled", 00:10:27.036 "thread": "nvmf_tgt_poll_group_000", 00:10:27.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:27.036 "listen_address": { 00:10:27.036 "trtype": "TCP", 00:10:27.036 "adrfam": "IPv4", 00:10:27.036 "traddr": "10.0.0.3", 00:10:27.036 "trsvcid": "4420" 00:10:27.036 }, 00:10:27.036 "peer_address": { 00:10:27.036 "trtype": "TCP", 00:10:27.036 "adrfam": "IPv4", 00:10:27.036 "traddr": "10.0.0.1", 00:10:27.036 "trsvcid": "37084" 00:10:27.036 }, 00:10:27.036 "auth": { 00:10:27.036 "state": "completed", 00:10:27.036 "digest": "sha384", 00:10:27.036 "dhgroup": "ffdhe3072" 00:10:27.036 } 00:10:27.036 } 00:10:27.036 ]' 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:27.036 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.295 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.295 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:27.295 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.295 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:27.295 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.320 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.888 00:10:28.888 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.888 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.888 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.148 { 00:10:29.148 "cntlid": 71, 00:10:29.148 "qid": 0, 00:10:29.148 "state": "enabled", 00:10:29.148 "thread": "nvmf_tgt_poll_group_000", 00:10:29.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:29.148 "listen_address": { 00:10:29.148 "trtype": "TCP", 00:10:29.148 "adrfam": "IPv4", 00:10:29.148 "traddr": "10.0.0.3", 00:10:29.148 "trsvcid": "4420" 00:10:29.148 }, 00:10:29.148 "peer_address": { 00:10:29.148 "trtype": "TCP", 00:10:29.148 "adrfam": "IPv4", 00:10:29.148 "traddr": "10.0.0.1", 00:10:29.148 "trsvcid": "37108" 00:10:29.148 }, 00:10:29.148 "auth": { 00:10:29.148 "state": "completed", 00:10:29.148 "digest": "sha384", 00:10:29.148 "dhgroup": "ffdhe3072" 00:10:29.148 } 00:10:29.148 } 00:10:29.148 ]' 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.148 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.406 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:29.407 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:29.974 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.232 13:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.232 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.491 00:10:30.491 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.491 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.491 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.750 { 00:10:30.750 "cntlid": 73, 00:10:30.750 "qid": 0, 00:10:30.750 "state": "enabled", 00:10:30.750 "thread": "nvmf_tgt_poll_group_000", 00:10:30.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:30.750 "listen_address": { 00:10:30.750 "trtype": "TCP", 00:10:30.750 "adrfam": "IPv4", 00:10:30.750 "traddr": "10.0.0.3", 00:10:30.750 "trsvcid": "4420" 00:10:30.750 }, 00:10:30.750 "peer_address": { 00:10:30.750 "trtype": "TCP", 00:10:30.750 "adrfam": "IPv4", 00:10:30.750 "traddr": "10.0.0.1", 00:10:30.750 "trsvcid": "37142" 00:10:30.750 }, 00:10:30.750 "auth": { 00:10:30.750 "state": "completed", 00:10:30.750 "digest": "sha384", 00:10:30.750 "dhgroup": "ffdhe4096" 00:10:30.750 } 00:10:30.750 } 00:10:30.750 ]' 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:30.750 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.009 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:31.009 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.009 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.009 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.009 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.269 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:31.269 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:31.836 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.095 13:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.095 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.096 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.096 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.096 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.355 00:10:32.355 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.355 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.355 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.614 { 00:10:32.614 "cntlid": 75, 00:10:32.614 "qid": 0, 00:10:32.614 "state": "enabled", 00:10:32.614 "thread": "nvmf_tgt_poll_group_000", 00:10:32.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:32.614 "listen_address": { 00:10:32.614 "trtype": "TCP", 00:10:32.614 "adrfam": "IPv4", 00:10:32.614 "traddr": "10.0.0.3", 00:10:32.614 "trsvcid": "4420" 00:10:32.614 }, 00:10:32.614 "peer_address": { 00:10:32.614 "trtype": "TCP", 00:10:32.614 "adrfam": "IPv4", 00:10:32.614 "traddr": "10.0.0.1", 00:10:32.614 "trsvcid": "37180" 00:10:32.614 }, 00:10:32.614 "auth": { 00:10:32.614 "state": "completed", 00:10:32.614 "digest": "sha384", 00:10:32.614 "dhgroup": "ffdhe4096" 00:10:32.614 } 00:10:32.614 } 00:10:32.614 ]' 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:32.614 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:10:32.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.873 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.132 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:33.132 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:33.699 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.699 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:33.699 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.699 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.699 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.700 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.700 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:33.700 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.958 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.217 00:10:34.217 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.217 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.217 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.477 { 00:10:34.477 "cntlid": 77, 00:10:34.477 "qid": 0, 00:10:34.477 "state": "enabled", 00:10:34.477 "thread": "nvmf_tgt_poll_group_000", 00:10:34.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:34.477 "listen_address": { 00:10:34.477 "trtype": "TCP", 00:10:34.477 "adrfam": "IPv4", 00:10:34.477 "traddr": "10.0.0.3", 00:10:34.477 "trsvcid": "4420" 00:10:34.477 }, 00:10:34.477 "peer_address": { 00:10:34.477 "trtype": "TCP", 00:10:34.477 "adrfam": "IPv4", 00:10:34.477 "traddr": "10.0.0.1", 00:10:34.477 "trsvcid": "37210" 00:10:34.477 }, 00:10:34.477 "auth": { 00:10:34.477 "state": "completed", 00:10:34.477 "digest": "sha384", 00:10:34.477 "dhgroup": "ffdhe4096" 00:10:34.477 } 00:10:34.477 } 00:10:34.477 ]' 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.477 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.743 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:34.743 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:35.313 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.572 13:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.572 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.830 00:10:35.830 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.830 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.830 13:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.089 { 00:10:36.089 "cntlid": 79, 00:10:36.089 "qid": 0, 00:10:36.089 "state": "enabled", 00:10:36.089 "thread": "nvmf_tgt_poll_group_000", 00:10:36.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:36.089 "listen_address": { 00:10:36.089 "trtype": "TCP", 00:10:36.089 "adrfam": "IPv4", 00:10:36.089 "traddr": "10.0.0.3", 00:10:36.089 "trsvcid": "4420" 00:10:36.089 }, 00:10:36.089 "peer_address": { 00:10:36.089 "trtype": "TCP", 00:10:36.089 "adrfam": "IPv4", 00:10:36.089 "traddr": "10.0.0.1", 00:10:36.089 "trsvcid": "52882" 00:10:36.089 }, 00:10:36.089 "auth": { 00:10:36.089 "state": "completed", 00:10:36.089 "digest": "sha384", 00:10:36.089 "dhgroup": "ffdhe4096" 00:10:36.089 } 00:10:36.089 } 00:10:36.089 ]' 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.089 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:36.089 13:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.348 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:36.348 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.348 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.348 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.348 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.606 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:36.606 13:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:37.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.435 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.694 00:10:37.694 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.694 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.694 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.953 { 00:10:37.953 "cntlid": 81, 00:10:37.953 "qid": 0, 00:10:37.953 "state": "enabled", 00:10:37.953 "thread": "nvmf_tgt_poll_group_000", 00:10:37.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:37.953 "listen_address": { 00:10:37.953 "trtype": "TCP", 00:10:37.953 "adrfam": "IPv4", 00:10:37.953 "traddr": "10.0.0.3", 00:10:37.953 "trsvcid": "4420" 00:10:37.953 }, 00:10:37.953 "peer_address": { 00:10:37.953 "trtype": "TCP", 00:10:37.953 "adrfam": "IPv4", 00:10:37.953 "traddr": "10.0.0.1", 00:10:37.953 "trsvcid": "52896" 00:10:37.953 }, 00:10:37.953 "auth": { 00:10:37.953 "state": "completed", 00:10:37.953 "digest": "sha384", 00:10:37.953 "dhgroup": "ffdhe6144" 00:10:37.953 } 00:10:37.953 } 00:10:37.953 ]' 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:37.953 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.211 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.212 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.212 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.212 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:38.212 13:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.149 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.717 00:10:39.717 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.717 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.717 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.976 { 00:10:39.976 "cntlid": 83, 00:10:39.976 "qid": 0, 00:10:39.976 "state": "enabled", 00:10:39.976 "thread": "nvmf_tgt_poll_group_000", 00:10:39.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:39.976 "listen_address": { 00:10:39.976 "trtype": "TCP", 00:10:39.976 "adrfam": "IPv4", 00:10:39.976 "traddr": "10.0.0.3", 00:10:39.976 "trsvcid": "4420" 00:10:39.976 }, 00:10:39.976 "peer_address": { 00:10:39.976 "trtype": "TCP", 00:10:39.976 "adrfam": "IPv4", 00:10:39.976 "traddr": "10.0.0.1", 00:10:39.976 "trsvcid": "52918" 00:10:39.976 }, 00:10:39.976 "auth": { 00:10:39.976 "state": "completed", 00:10:39.976 "digest": "sha384", 
00:10:39.976 "dhgroup": "ffdhe6144" 00:10:39.976 } 00:10:39.976 } 00:10:39.976 ]' 00:10:39.976 13:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.976 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.234 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:40.234 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:40.800 13:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.059 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.627 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.627 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.627 { 00:10:41.627 "cntlid": 85, 00:10:41.627 "qid": 0, 00:10:41.627 "state": "enabled", 00:10:41.627 "thread": "nvmf_tgt_poll_group_000", 00:10:41.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:41.627 "listen_address": { 00:10:41.627 "trtype": "TCP", 00:10:41.627 "adrfam": "IPv4", 00:10:41.627 "traddr": "10.0.0.3", 00:10:41.627 "trsvcid": "4420" 00:10:41.627 }, 00:10:41.627 "peer_address": { 00:10:41.627 "trtype": "TCP", 00:10:41.627 "adrfam": "IPv4", 00:10:41.627 "traddr": "10.0.0.1", 00:10:41.627 "trsvcid": "52960" 
00:10:41.627 }, 00:10:41.627 "auth": { 00:10:41.627 "state": "completed", 00:10:41.627 "digest": "sha384", 00:10:41.627 "dhgroup": "ffdhe6144" 00:10:41.627 } 00:10:41.627 } 00:10:41.628 ]' 00:10:41.628 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.886 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.145 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:42.145 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:42.713 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:42.971 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.972 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.538 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.538 { 00:10:43.538 "cntlid": 87, 00:10:43.538 "qid": 0, 00:10:43.538 "state": "enabled", 00:10:43.538 "thread": "nvmf_tgt_poll_group_000", 00:10:43.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:43.538 "listen_address": { 00:10:43.538 "trtype": "TCP", 00:10:43.538 "adrfam": "IPv4", 00:10:43.538 "traddr": "10.0.0.3", 00:10:43.538 "trsvcid": "4420" 00:10:43.538 }, 00:10:43.538 "peer_address": { 00:10:43.538 "trtype": "TCP", 00:10:43.538 "adrfam": "IPv4", 00:10:43.538 "traddr": "10.0.0.1", 00:10:43.538 "trsvcid": 
"53002" 00:10:43.538 }, 00:10:43.538 "auth": { 00:10:43.538 "state": "completed", 00:10:43.538 "digest": "sha384", 00:10:43.538 "dhgroup": "ffdhe6144" 00:10:43.538 } 00:10:43.538 } 00:10:43.538 ]' 00:10:43.538 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.796 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.054 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:44.054 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:44.991 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.991 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.558 00:10:45.558 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.558 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.558 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.817 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.817 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.817 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.817 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.817 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.817 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.817 { 00:10:45.817 "cntlid": 89, 00:10:45.817 "qid": 0, 00:10:45.817 "state": "enabled", 00:10:45.817 "thread": "nvmf_tgt_poll_group_000", 00:10:45.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:45.818 "listen_address": { 00:10:45.818 "trtype": "TCP", 00:10:45.818 "adrfam": "IPv4", 00:10:45.818 "traddr": "10.0.0.3", 00:10:45.818 "trsvcid": "4420" 00:10:45.818 }, 00:10:45.818 "peer_address": { 00:10:45.818 
"trtype": "TCP", 00:10:45.818 "adrfam": "IPv4", 00:10:45.818 "traddr": "10.0.0.1", 00:10:45.818 "trsvcid": "45580" 00:10:45.818 }, 00:10:45.818 "auth": { 00:10:45.818 "state": "completed", 00:10:45.818 "digest": "sha384", 00:10:45.818 "dhgroup": "ffdhe8192" 00:10:45.818 } 00:10:45.818 } 00:10:45.818 ]' 00:10:45.818 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.818 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.818 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.818 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:45.818 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.818 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.818 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.818 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.385 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:46.385 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:46.952 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:47.211 13:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.211 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.779 00:10:47.779 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.780 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.780 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.780 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.780 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.780 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.780 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.039 { 00:10:48.039 "cntlid": 91, 00:10:48.039 "qid": 0, 00:10:48.039 "state": "enabled", 00:10:48.039 "thread": "nvmf_tgt_poll_group_000", 00:10:48.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 
00:10:48.039 "listen_address": { 00:10:48.039 "trtype": "TCP", 00:10:48.039 "adrfam": "IPv4", 00:10:48.039 "traddr": "10.0.0.3", 00:10:48.039 "trsvcid": "4420" 00:10:48.039 }, 00:10:48.039 "peer_address": { 00:10:48.039 "trtype": "TCP", 00:10:48.039 "adrfam": "IPv4", 00:10:48.039 "traddr": "10.0.0.1", 00:10:48.039 "trsvcid": "45600" 00:10:48.039 }, 00:10:48.039 "auth": { 00:10:48.039 "state": "completed", 00:10:48.039 "digest": "sha384", 00:10:48.039 "dhgroup": "ffdhe8192" 00:10:48.039 } 00:10:48.039 } 00:10:48.039 ]' 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.039 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.299 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:48.299 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:48.867 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.435 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.015 00:10:50.015 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.015 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.015 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.303 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.303 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.303 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.303 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.303 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.303 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.303 { 00:10:50.303 "cntlid": 93, 00:10:50.303 "qid": 0, 00:10:50.303 "state": "enabled", 00:10:50.303 "thread": 
"nvmf_tgt_poll_group_000", 00:10:50.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:50.303 "listen_address": { 00:10:50.303 "trtype": "TCP", 00:10:50.303 "adrfam": "IPv4", 00:10:50.303 "traddr": "10.0.0.3", 00:10:50.303 "trsvcid": "4420" 00:10:50.303 }, 00:10:50.303 "peer_address": { 00:10:50.303 "trtype": "TCP", 00:10:50.303 "adrfam": "IPv4", 00:10:50.303 "traddr": "10.0.0.1", 00:10:50.303 "trsvcid": "45614" 00:10:50.303 }, 00:10:50.303 "auth": { 00:10:50.303 "state": "completed", 00:10:50.303 "digest": "sha384", 00:10:50.303 "dhgroup": "ffdhe8192" 00:10:50.303 } 00:10:50.303 } 00:10:50.304 ]' 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.304 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.568 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:50.569 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.136 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:51.136 13:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.395 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.331 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.331 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.331 { 00:10:52.331 "cntlid": 95, 00:10:52.331 "qid": 0, 00:10:52.331 "state": "enabled", 00:10:52.331 
"thread": "nvmf_tgt_poll_group_000", 00:10:52.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:52.332 "listen_address": { 00:10:52.332 "trtype": "TCP", 00:10:52.332 "adrfam": "IPv4", 00:10:52.332 "traddr": "10.0.0.3", 00:10:52.332 "trsvcid": "4420" 00:10:52.332 }, 00:10:52.332 "peer_address": { 00:10:52.332 "trtype": "TCP", 00:10:52.332 "adrfam": "IPv4", 00:10:52.332 "traddr": "10.0.0.1", 00:10:52.332 "trsvcid": "45648" 00:10:52.332 }, 00:10:52.332 "auth": { 00:10:52.332 "state": "completed", 00:10:52.332 "digest": "sha384", 00:10:52.332 "dhgroup": "ffdhe8192" 00:10:52.332 } 00:10:52.332 } 00:10:52.332 ]' 00:10:52.332 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.332 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.332 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.590 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:52.590 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.590 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.590 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.590 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.849 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:52.849 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.417 13:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:53.417 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.676 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.935 00:10:53.936 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.936 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.936 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.195 { 00:10:54.195 "cntlid": 97, 00:10:54.195 "qid": 0, 00:10:54.195 "state": "enabled", 00:10:54.195 "thread": "nvmf_tgt_poll_group_000", 00:10:54.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:54.195 "listen_address": { 00:10:54.195 "trtype": "TCP", 00:10:54.195 "adrfam": "IPv4", 00:10:54.195 "traddr": "10.0.0.3", 00:10:54.195 "trsvcid": "4420" 00:10:54.195 }, 00:10:54.195 "peer_address": { 00:10:54.195 "trtype": "TCP", 00:10:54.195 "adrfam": "IPv4", 00:10:54.195 "traddr": "10.0.0.1", 00:10:54.195 "trsvcid": "45654" 00:10:54.195 }, 00:10:54.195 "auth": { 00:10:54.195 "state": "completed", 00:10:54.195 "digest": "sha512", 00:10:54.195 "dhgroup": "null" 00:10:54.195 } 00:10:54.195 } 00:10:54.195 ]' 00:10:54.195 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.454 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.713 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:54.713 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:55.281 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.540 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.799 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.057 13:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.057 { 00:10:56.057 "cntlid": 99, 00:10:56.057 "qid": 0, 00:10:56.057 "state": "enabled", 00:10:56.057 "thread": "nvmf_tgt_poll_group_000", 00:10:56.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:56.057 "listen_address": { 00:10:56.057 "trtype": "TCP", 00:10:56.057 "adrfam": "IPv4", 00:10:56.057 "traddr": "10.0.0.3", 00:10:56.057 "trsvcid": "4420" 00:10:56.057 }, 00:10:56.057 "peer_address": { 00:10:56.057 "trtype": "TCP", 00:10:56.057 "adrfam": "IPv4", 00:10:56.057 "traddr": "10.0.0.1", 00:10:56.057 "trsvcid": "47592" 00:10:56.057 }, 00:10:56.057 "auth": { 00:10:56.057 "state": "completed", 00:10:56.057 "digest": "sha512", 00:10:56.057 "dhgroup": "null" 00:10:56.057 } 00:10:56.057 } 00:10:56.057 ]' 00:10:56.057 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.315 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.573 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:56.573 13:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.140 13:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:57.140 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.399 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.966 00:10:57.966 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.966 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.966 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.225 { 00:10:58.225 "cntlid": 101, 00:10:58.225 "qid": 0, 00:10:58.225 "state": "enabled", 00:10:58.225 "thread": "nvmf_tgt_poll_group_000", 00:10:58.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:10:58.225 "listen_address": { 00:10:58.225 "trtype": "TCP", 00:10:58.225 "adrfam": "IPv4", 00:10:58.225 "traddr": "10.0.0.3", 00:10:58.225 "trsvcid": "4420" 00:10:58.225 }, 00:10:58.225 "peer_address": { 00:10:58.225 "trtype": "TCP", 00:10:58.225 "adrfam": "IPv4", 00:10:58.225 "traddr": "10.0.0.1", 00:10:58.225 "trsvcid": "47630" 00:10:58.225 }, 00:10:58.225 "auth": { 00:10:58.225 "state": "completed", 00:10:58.225 "digest": "sha512", 00:10:58.225 "dhgroup": "null" 00:10:58.225 } 00:10:58.225 } 00:10:58.225 ]' 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.225 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.484 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:58.484 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:59.050 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.308 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.567 00:10:59.567 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.567 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.567 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.134 { 00:11:00.134 "cntlid": 103, 00:11:00.134 "qid": 0, 00:11:00.134 "state": "enabled", 00:11:00.134 "thread": "nvmf_tgt_poll_group_000", 00:11:00.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:00.134 "listen_address": { 00:11:00.134 "trtype": "TCP", 00:11:00.134 "adrfam": "IPv4", 00:11:00.134 "traddr": "10.0.0.3", 00:11:00.134 "trsvcid": "4420" 00:11:00.134 }, 00:11:00.134 "peer_address": { 00:11:00.134 "trtype": "TCP", 00:11:00.134 "adrfam": "IPv4", 00:11:00.134 "traddr": "10.0.0.1", 00:11:00.134 "trsvcid": "47662" 00:11:00.134 }, 00:11:00.134 "auth": { 00:11:00.134 "state": "completed", 00:11:00.134 "digest": "sha512", 00:11:00.134 "dhgroup": "null" 00:11:00.134 } 00:11:00.134 } 00:11:00.134 ]' 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.134 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.392 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:00.392 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:00.976 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:01.234 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:01.234 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.234 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:01.234 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:01.234 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.234 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.235 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.493 00:11:01.493 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.493 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.493 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.751 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.752 
13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.752 { 00:11:01.752 "cntlid": 105, 00:11:01.752 "qid": 0, 00:11:01.752 "state": "enabled", 00:11:01.752 "thread": "nvmf_tgt_poll_group_000", 00:11:01.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:01.752 "listen_address": { 00:11:01.752 "trtype": "TCP", 00:11:01.752 "adrfam": "IPv4", 00:11:01.752 "traddr": "10.0.0.3", 00:11:01.752 "trsvcid": "4420" 00:11:01.752 }, 00:11:01.752 "peer_address": { 00:11:01.752 "trtype": "TCP", 00:11:01.752 "adrfam": "IPv4", 00:11:01.752 "traddr": "10.0.0.1", 00:11:01.752 "trsvcid": "47692" 00:11:01.752 }, 00:11:01.752 "auth": { 00:11:01.752 "state": "completed", 00:11:01.752 "digest": "sha512", 00:11:01.752 "dhgroup": "ffdhe2048" 00:11:01.752 } 00:11:01.752 } 00:11:01.752 ]' 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.752 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.010 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.011 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.011 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.011 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:02.011 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:02.578 13:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:02.578 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.145 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.404 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.404 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.662 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.662 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.662 { 00:11:03.663 "cntlid": 107, 00:11:03.663 "qid": 0, 00:11:03.663 "state": "enabled", 00:11:03.663 "thread": "nvmf_tgt_poll_group_000", 00:11:03.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:03.663 "listen_address": { 00:11:03.663 "trtype": "TCP", 00:11:03.663 "adrfam": "IPv4", 00:11:03.663 "traddr": "10.0.0.3", 00:11:03.663 "trsvcid": "4420" 00:11:03.663 }, 00:11:03.663 "peer_address": { 00:11:03.663 "trtype": "TCP", 00:11:03.663 "adrfam": "IPv4", 00:11:03.663 "traddr": "10.0.0.1", 00:11:03.663 "trsvcid": "47708" 00:11:03.663 }, 00:11:03.663 "auth": { 00:11:03.663 "state": "completed", 00:11:03.663 "digest": "sha512", 00:11:03.663 "dhgroup": "ffdhe2048" 00:11:03.663 } 00:11:03.663 } 00:11:03.663 ]' 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.663 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.921 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:03.921 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:04.489 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.748 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.007 00:11:05.007 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.007 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.007 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.274 { 00:11:05.274 "cntlid": 109, 00:11:05.274 "qid": 0, 00:11:05.274 "state": "enabled", 00:11:05.274 "thread": "nvmf_tgt_poll_group_000", 00:11:05.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:05.274 "listen_address": { 00:11:05.274 "trtype": "TCP", 00:11:05.274 "adrfam": "IPv4", 00:11:05.274 "traddr": "10.0.0.3", 00:11:05.274 "trsvcid": "4420" 00:11:05.274 }, 00:11:05.274 "peer_address": { 00:11:05.274 "trtype": "TCP", 00:11:05.274 "adrfam": "IPv4", 00:11:05.274 "traddr": "10.0.0.1", 00:11:05.274 "trsvcid": "34676" 00:11:05.274 }, 00:11:05.274 "auth": { 00:11:05.274 "state": "completed", 00:11:05.274 "digest": "sha512", 00:11:05.274 "dhgroup": "ffdhe2048" 00:11:05.274 } 00:11:05.274 } 00:11:05.274 ]' 00:11:05.274 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.553 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.821 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:05.821 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:06.388 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:06.389 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:06.647 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:06.647 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.647 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:06.647 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:06.647 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:06.647 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.648 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.906 00:11:06.906 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.906 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.906 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.165 { 00:11:07.165 "cntlid": 111, 00:11:07.165 "qid": 0, 00:11:07.165 "state": "enabled", 00:11:07.165 "thread": "nvmf_tgt_poll_group_000", 00:11:07.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:07.165 "listen_address": { 00:11:07.165 "trtype": "TCP", 00:11:07.165 "adrfam": "IPv4", 00:11:07.165 "traddr": "10.0.0.3", 00:11:07.165 "trsvcid": "4420" 00:11:07.165 }, 00:11:07.165 "peer_address": { 00:11:07.165 "trtype": "TCP", 00:11:07.165 "adrfam": "IPv4", 00:11:07.165 "traddr": "10.0.0.1", 00:11:07.165 "trsvcid": "34700" 00:11:07.165 }, 00:11:07.165 "auth": { 00:11:07.165 "state": "completed", 00:11:07.165 "digest": "sha512", 00:11:07.165 "dhgroup": "ffdhe2048" 00:11:07.165 } 00:11:07.165 } 00:11:07.165 ]' 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.165 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.424 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:07.424 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:07.991 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.250 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.507 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.766 { 00:11:08.766 "cntlid": 113, 00:11:08.766 "qid": 0, 00:11:08.766 "state": "enabled", 00:11:08.766 "thread": "nvmf_tgt_poll_group_000", 00:11:08.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:08.766 "listen_address": { 00:11:08.766 "trtype": "TCP", 00:11:08.766 "adrfam": "IPv4", 00:11:08.766 "traddr": "10.0.0.3", 00:11:08.766 "trsvcid": "4420" 00:11:08.766 }, 00:11:08.766 "peer_address": { 00:11:08.766 "trtype": "TCP", 00:11:08.766 "adrfam": "IPv4", 00:11:08.766 "traddr": "10.0.0.1", 00:11:08.766 "trsvcid": "34720" 00:11:08.766 }, 00:11:08.766 "auth": { 00:11:08.766 "state": "completed", 00:11:08.766 "digest": "sha512", 00:11:08.766 "dhgroup": "ffdhe3072" 00:11:08.766 } 00:11:08.766 } 00:11:08.766 ]' 00:11:08.766 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.024 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:09.024 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.024 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.024 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.025 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.025 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.025 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.283 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:09.283 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret 
DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:09.851 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.109 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.110 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.110 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.110 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.110 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.110 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.110 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.368 00:11:10.368 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.368 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.368 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.627 { 00:11:10.627 "cntlid": 115, 00:11:10.627 "qid": 0, 00:11:10.627 "state": "enabled", 00:11:10.627 "thread": "nvmf_tgt_poll_group_000", 00:11:10.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:10.627 "listen_address": { 00:11:10.627 "trtype": "TCP", 00:11:10.627 "adrfam": "IPv4", 00:11:10.627 "traddr": "10.0.0.3", 00:11:10.627 "trsvcid": "4420" 00:11:10.627 }, 00:11:10.627 "peer_address": { 00:11:10.627 "trtype": "TCP", 00:11:10.627 "adrfam": "IPv4", 00:11:10.627 "traddr": "10.0.0.1", 00:11:10.627 "trsvcid": "34754" 00:11:10.627 }, 00:11:10.627 "auth": { 00:11:10.627 "state": "completed", 00:11:10.627 "digest": "sha512", 00:11:10.627 "dhgroup": "ffdhe3072" 00:11:10.627 } 00:11:10.627 } 00:11:10.627 ]' 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:10.627 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.886 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.886 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.886 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.145 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:11.145 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid 
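After each attach the trace checks the controller name and then reads back the single qpair's negotiated auth parameters from the target. A minimal sketch of that verification step, reusing the helper functions and variables from the setup sketch above (the expected digest/dhgroup are simply whatever this iteration configured):

  # attach the initiator-side bdev controller using the DH-HMAC-CHAP key names for this iteration
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey_arg[@]}"

  # the controller should appear under the requested name
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # the target-side qpair should report a completed negotiation with the expected parameters
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]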
c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.713 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.971 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.972 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.972 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.972 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.230 00:11:12.230 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.230 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.230 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.489 { 00:11:12.489 "cntlid": 117, 00:11:12.489 "qid": 0, 00:11:12.489 "state": "enabled", 00:11:12.489 "thread": "nvmf_tgt_poll_group_000", 00:11:12.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:12.489 "listen_address": { 00:11:12.489 "trtype": "TCP", 00:11:12.489 "adrfam": "IPv4", 00:11:12.489 "traddr": "10.0.0.3", 00:11:12.489 "trsvcid": "4420" 00:11:12.489 }, 00:11:12.489 "peer_address": { 00:11:12.489 "trtype": "TCP", 00:11:12.489 "adrfam": "IPv4", 00:11:12.489 "traddr": "10.0.0.1", 00:11:12.489 "trsvcid": "34780" 00:11:12.489 }, 00:11:12.489 "auth": { 00:11:12.489 "state": "completed", 00:11:12.489 "digest": "sha512", 00:11:12.489 "dhgroup": "ffdhe3072" 00:11:12.489 } 00:11:12.489 } 00:11:12.489 ]' 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.489 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.748 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:12.748 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:13.314 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:13.572 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.573 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
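The same key pair is also exercised from the kernel initiator with nvme-cli, passing the secrets in the DHHC-1:<id>:<base64>: form seen throughout the trace. A sketch of that connect/disconnect cycle follows; the secret strings are placeholders rather than the values from the log, and the hostid reuses the UUID portion of the host NQN as the trace does.

  hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba
  dhchap_key='DHHC-1:02:<base64-secret>:'        # placeholder for the host secret of this key index
  dhchap_ctrl_key='DHHC-1:01:<base64-secret>:'   # placeholder for the controller (bidirectional) secret

  # in-band authenticated connect (-i io-queue count, -l ctrl-loss-tmo, mirroring the traced command)
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ctrl_key"

  nvme disconnect -n "$subnqn"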
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.831 00:11:13.831 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.831 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.831 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.090 { 00:11:14.090 "cntlid": 119, 00:11:14.090 "qid": 0, 00:11:14.090 "state": "enabled", 00:11:14.090 "thread": "nvmf_tgt_poll_group_000", 00:11:14.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:14.090 "listen_address": { 00:11:14.090 "trtype": "TCP", 00:11:14.090 "adrfam": "IPv4", 00:11:14.090 "traddr": "10.0.0.3", 00:11:14.090 "trsvcid": "4420" 00:11:14.090 }, 00:11:14.090 "peer_address": { 00:11:14.090 "trtype": "TCP", 00:11:14.090 "adrfam": "IPv4", 00:11:14.090 "traddr": "10.0.0.1", 00:11:14.090 "trsvcid": "34818" 00:11:14.090 }, 00:11:14.090 "auth": { 00:11:14.090 "state": "completed", 00:11:14.090 "digest": "sha512", 00:11:14.090 "dhgroup": "ffdhe3072" 00:11:14.090 } 00:11:14.090 } 00:11:14.090 ]' 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.090 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.091 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.349 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.349 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.349 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.349 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:14.349 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:14.916 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.176 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.435 00:11:15.435 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.435 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.435 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.693 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.693 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.693 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.693 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.693 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.693 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.693 { 00:11:15.693 "cntlid": 121, 00:11:15.693 "qid": 0, 00:11:15.693 "state": "enabled", 00:11:15.693 "thread": "nvmf_tgt_poll_group_000", 00:11:15.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:15.693 "listen_address": { 00:11:15.693 "trtype": "TCP", 00:11:15.693 "adrfam": "IPv4", 00:11:15.693 "traddr": "10.0.0.3", 00:11:15.693 "trsvcid": "4420" 00:11:15.693 }, 00:11:15.693 "peer_address": { 00:11:15.693 "trtype": "TCP", 00:11:15.693 "adrfam": "IPv4", 00:11:15.693 "traddr": "10.0.0.1", 00:11:15.693 "trsvcid": "33114" 00:11:15.693 }, 00:11:15.693 "auth": { 00:11:15.693 "state": "completed", 00:11:15.693 "digest": "sha512", 00:11:15.693 "dhgroup": "ffdhe4096" 00:11:15.693 } 00:11:15.694 } 00:11:15.694 ]' 00:11:15.694 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.952 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:15.952 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.952 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.952 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.952 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.952 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.952 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.211 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret 
DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:16.211 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:16.776 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
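Each iteration ends by unwinding what it set up so the next digest/dhgroup/key combination starts from a clean initiator and target. Roughly, following the order in the trace and reusing the same helpers and variables as above:

  # drop the initiator-side bdev controller created for this pass
  hostrpc bdev_nvme_detach_controller nvme0

  # disconnect the kernel initiator and de-authorize the host on the subsystem
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"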
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.034 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.292 00:11:17.292 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.292 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.292 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.551 { 00:11:17.551 "cntlid": 123, 00:11:17.551 "qid": 0, 00:11:17.551 "state": "enabled", 00:11:17.551 "thread": "nvmf_tgt_poll_group_000", 00:11:17.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:17.551 "listen_address": { 00:11:17.551 "trtype": "TCP", 00:11:17.551 "adrfam": "IPv4", 00:11:17.551 "traddr": "10.0.0.3", 00:11:17.551 "trsvcid": "4420" 00:11:17.551 }, 00:11:17.551 "peer_address": { 00:11:17.551 "trtype": "TCP", 00:11:17.551 "adrfam": "IPv4", 00:11:17.551 "traddr": "10.0.0.1", 00:11:17.551 "trsvcid": "33126" 00:11:17.551 }, 00:11:17.551 "auth": { 00:11:17.551 "state": "completed", 00:11:17.551 "digest": "sha512", 00:11:17.551 "dhgroup": "ffdhe4096" 00:11:17.551 } 00:11:17.551 } 00:11:17.551 ]' 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:17.551 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.809 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.809 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.809 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.809 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.809 13:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.098 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:18.098 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:18.664 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.664 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:18.664 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.664 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.665 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.665 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.665 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:18.665 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.923 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.923 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.181 00:11:19.182 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.182 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.182 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.440 { 00:11:19.440 "cntlid": 125, 00:11:19.440 "qid": 0, 00:11:19.440 "state": "enabled", 00:11:19.440 "thread": "nvmf_tgt_poll_group_000", 00:11:19.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:19.440 "listen_address": { 00:11:19.440 "trtype": "TCP", 00:11:19.440 "adrfam": "IPv4", 00:11:19.440 "traddr": "10.0.0.3", 00:11:19.440 "trsvcid": "4420" 00:11:19.440 }, 00:11:19.440 "peer_address": { 00:11:19.440 "trtype": "TCP", 00:11:19.440 "adrfam": "IPv4", 00:11:19.440 "traddr": "10.0.0.1", 00:11:19.440 "trsvcid": "33172" 00:11:19.440 }, 00:11:19.440 "auth": { 00:11:19.440 "state": "completed", 00:11:19.440 "digest": "sha512", 00:11:19.440 "dhgroup": "ffdhe4096" 00:11:19.440 } 00:11:19.440 } 00:11:19.440 ]' 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.440 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.699 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.699 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.699 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.957 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:19.957 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.522 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.089 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.089 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.089 { 00:11:21.089 "cntlid": 127, 00:11:21.089 "qid": 0, 00:11:21.089 "state": "enabled", 00:11:21.089 "thread": "nvmf_tgt_poll_group_000", 00:11:21.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:21.089 "listen_address": { 00:11:21.089 "trtype": "TCP", 00:11:21.089 "adrfam": "IPv4", 00:11:21.089 "traddr": "10.0.0.3", 00:11:21.089 "trsvcid": "4420" 00:11:21.089 }, 00:11:21.089 "peer_address": { 00:11:21.089 "trtype": "TCP", 00:11:21.089 "adrfam": "IPv4", 00:11:21.089 "traddr": "10.0.0.1", 00:11:21.089 "trsvcid": "33198" 00:11:21.089 }, 00:11:21.089 "auth": { 00:11:21.089 "state": "completed", 00:11:21.089 "digest": "sha512", 00:11:21.089 "dhgroup": "ffdhe4096" 00:11:21.089 } 00:11:21.089 } 00:11:21.089 ]' 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.350 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
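The last key index in each pass (key3 in this run) is configured without a controller key: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in the trace collapses to nothing when no bidirectional secret is defined, so only the host authenticates itself. A small sketch of that conditional, where the empty ckey3 is an assumption drawn from the traced commands:

  keyid=3
  ckey3=""                                               # no controller key defined for this index in the trace
  ckey_arg=(${ckey3:+--dhchap-ctrlr-key "ckey$keyid"})   # expands to an empty array when ckey3 is empty

  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey_arg[@]}"
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey_arg[@]}"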
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.609 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:21.609 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:22.177 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.435 13:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.435 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.694 00:11:22.694 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.694 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.694 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.952 { 00:11:22.952 "cntlid": 129, 00:11:22.952 "qid": 0, 00:11:22.952 "state": "enabled", 00:11:22.952 "thread": "nvmf_tgt_poll_group_000", 00:11:22.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:22.952 "listen_address": { 00:11:22.952 "trtype": "TCP", 00:11:22.952 "adrfam": "IPv4", 00:11:22.952 "traddr": "10.0.0.3", 00:11:22.952 "trsvcid": "4420" 00:11:22.952 }, 00:11:22.952 "peer_address": { 00:11:22.952 "trtype": "TCP", 00:11:22.952 "adrfam": "IPv4", 00:11:22.952 "traddr": "10.0.0.1", 00:11:22.952 "trsvcid": "33220" 00:11:22.952 }, 00:11:22.952 "auth": { 00:11:22.952 "state": "completed", 00:11:22.952 "digest": "sha512", 00:11:22.952 "dhgroup": "ffdhe6144" 00:11:22.952 } 00:11:22.952 } 00:11:22.952 ]' 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:22.952 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.211 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.211 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.211 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.211 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.211 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.469 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:23.469 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.036 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:24.037 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.295 13:21:13 
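
After attaching the controller through the host stack, the script verifies that the qpair really authenticated with the expected parameters by reading the subsystem's qpair list on the target and checking the auth block with jq, as in the JSON dump above, then detaches again. A condensed sketch of those assertions (the expected values track the current loop iteration, sha512/ffdhe6144 here):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    # Tear the bdev controller down before the kernel-initiator check.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
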
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.295 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.296 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.296 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.554 00:11:24.554 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.554 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.554 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.814 { 00:11:24.814 "cntlid": 131, 00:11:24.814 "qid": 0, 00:11:24.814 "state": "enabled", 00:11:24.814 "thread": "nvmf_tgt_poll_group_000", 00:11:24.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:24.814 "listen_address": { 00:11:24.814 "trtype": "TCP", 00:11:24.814 "adrfam": "IPv4", 00:11:24.814 "traddr": "10.0.0.3", 00:11:24.814 "trsvcid": "4420" 00:11:24.814 }, 00:11:24.814 "peer_address": { 00:11:24.814 "trtype": "TCP", 00:11:24.814 "adrfam": "IPv4", 00:11:24.814 "traddr": "10.0.0.1", 00:11:24.814 "trsvcid": "33254" 00:11:24.814 }, 00:11:24.814 "auth": { 00:11:24.814 "state": "completed", 00:11:24.814 "digest": "sha512", 00:11:24.814 "dhgroup": "ffdhe6144" 00:11:24.814 } 00:11:24.814 } 00:11:24.814 ]' 00:11:24.814 13:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.072 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.072 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.072 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.072 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:11:25.072 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.073 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.073 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.331 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:25.331 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:25.900 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.159 13:21:15 
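
Every key is also exercised through the kernel initiator: nvme-cli connects with the DHHC-1 secrets generated for that key earlier in the run, the connection is torn down, and the host entry is removed so the next iteration starts clean. Sketch of that leg, mirroring the nvme connect invocation recorded above (secret blobs abbreviated here; the full values appear in the log):

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba \
        --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 \
        --dhchap-secret "DHHC-1:01:ODRiNmZm..." --dhchap-ctrl-secret "DHHC-1:02:YTljYmNi..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba
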
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.159 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.418 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.418 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.418 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.418 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.676 00:11:26.676 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.676 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.676 13:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.935 { 00:11:26.935 "cntlid": 133, 00:11:26.935 "qid": 0, 00:11:26.935 "state": "enabled", 00:11:26.935 "thread": "nvmf_tgt_poll_group_000", 00:11:26.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:26.935 "listen_address": { 00:11:26.935 "trtype": "TCP", 00:11:26.935 "adrfam": "IPv4", 00:11:26.935 "traddr": "10.0.0.3", 00:11:26.935 "trsvcid": "4420" 00:11:26.935 }, 00:11:26.935 "peer_address": { 00:11:26.935 "trtype": "TCP", 00:11:26.935 "adrfam": "IPv4", 00:11:26.935 "traddr": "10.0.0.1", 00:11:26.935 "trsvcid": "55440" 00:11:26.935 }, 00:11:26.935 "auth": { 00:11:26.935 "state": "completed", 00:11:26.935 "digest": "sha512", 00:11:26.935 "dhgroup": "ffdhe6144" 00:11:26.935 } 00:11:26.935 } 00:11:26.935 ]' 00:11:26.935 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.195 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.454 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:27.454 13:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:28.023 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.282 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.541 00:11:28.541 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.541 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.541 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.800 { 00:11:28.800 "cntlid": 135, 00:11:28.800 "qid": 0, 00:11:28.800 "state": "enabled", 00:11:28.800 "thread": "nvmf_tgt_poll_group_000", 00:11:28.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:28.800 "listen_address": { 00:11:28.800 "trtype": "TCP", 00:11:28.800 "adrfam": "IPv4", 00:11:28.800 "traddr": "10.0.0.3", 00:11:28.800 "trsvcid": "4420" 00:11:28.800 }, 00:11:28.800 "peer_address": { 00:11:28.800 "trtype": "TCP", 00:11:28.800 "adrfam": "IPv4", 00:11:28.800 "traddr": "10.0.0.1", 00:11:28.800 "trsvcid": "55462" 00:11:28.800 }, 00:11:28.800 "auth": { 00:11:28.800 "state": "completed", 00:11:28.800 "digest": "sha512", 00:11:28.800 "dhgroup": "ffdhe6144" 00:11:28.800 } 00:11:28.800 } 00:11:28.800 ]' 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.800 13:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:28.800 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.059 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.059 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.059 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.059 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.059 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.318 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:29.318 13:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:29.885 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.143 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.710 00:11:30.710 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.710 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.710 13:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.969 { 00:11:30.969 "cntlid": 137, 00:11:30.969 "qid": 0, 00:11:30.969 "state": "enabled", 00:11:30.969 "thread": "nvmf_tgt_poll_group_000", 00:11:30.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:30.969 "listen_address": { 00:11:30.969 "trtype": "TCP", 00:11:30.969 "adrfam": "IPv4", 00:11:30.969 "traddr": "10.0.0.3", 00:11:30.969 "trsvcid": "4420" 00:11:30.969 }, 00:11:30.969 "peer_address": { 00:11:30.969 "trtype": "TCP", 00:11:30.969 "adrfam": "IPv4", 00:11:30.969 "traddr": "10.0.0.1", 00:11:30.969 "trsvcid": "55482" 00:11:30.969 }, 00:11:30.969 "auth": { 00:11:30.969 "state": "completed", 00:11:30.969 "digest": "sha512", 00:11:30.969 "dhgroup": "ffdhe8192" 00:11:30.969 } 00:11:30.969 } 00:11:30.969 ]' 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.969 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:30.969 13:21:20 
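
At this point the dhgroup has switched from ffdhe6144 to ffdhe8192. The target/auth.sh@119-123 markers in the trace correspond to a nested loop that repeats the same connect_authenticate cycle for every dhgroup/key combination; the digest (sha512 in this excerpt) appears to be iterated by an enclosing loop earlier in the script. Roughly, as reconstructed from the line markers:

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe6144, ffdhe8192 in this pass
        for keyid in "${!keys[@]}"; do         # key0 through key3
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
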
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.228 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:31.228 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.228 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.228 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.228 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.487 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:31.487 13:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:32.054 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:32.313 13:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.313 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.314 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.314 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.314 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.314 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.314 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.882 00:11:32.882 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.882 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.882 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.141 { 00:11:33.141 "cntlid": 139, 00:11:33.141 "qid": 0, 00:11:33.141 "state": "enabled", 00:11:33.141 "thread": "nvmf_tgt_poll_group_000", 00:11:33.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:33.141 "listen_address": { 00:11:33.141 "trtype": "TCP", 00:11:33.141 "adrfam": "IPv4", 00:11:33.141 "traddr": "10.0.0.3", 00:11:33.141 "trsvcid": "4420" 00:11:33.141 }, 00:11:33.141 "peer_address": { 00:11:33.141 "trtype": "TCP", 00:11:33.141 "adrfam": "IPv4", 00:11:33.141 "traddr": "10.0.0.1", 00:11:33.141 "trsvcid": "55508" 00:11:33.141 }, 00:11:33.141 "auth": { 00:11:33.141 "state": "completed", 00:11:33.141 "digest": "sha512", 00:11:33.141 "dhgroup": "ffdhe8192" 00:11:33.141 } 00:11:33.141 } 00:11:33.141 ]' 00:11:33.141 13:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.141 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.709 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:33.709 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: --dhchap-ctrl-secret DHHC-1:02:YTljYmNiYzY3OTVjMzVlZWY4NTYzYmY2MDJkODZmODNkOWUyZThiMWE0YjBmYjAzKO4RYQ==: 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.277 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.536 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.536 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.536 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.536 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.104 00:11:35.104 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.104 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.104 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.363 { 00:11:35.363 "cntlid": 141, 00:11:35.363 "qid": 0, 00:11:35.363 "state": "enabled", 00:11:35.363 "thread": "nvmf_tgt_poll_group_000", 00:11:35.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:35.363 "listen_address": { 00:11:35.363 "trtype": "TCP", 00:11:35.363 "adrfam": "IPv4", 00:11:35.363 "traddr": "10.0.0.3", 00:11:35.363 "trsvcid": "4420" 00:11:35.363 }, 00:11:35.363 "peer_address": { 00:11:35.363 "trtype": "TCP", 00:11:35.363 "adrfam": "IPv4", 00:11:35.363 "traddr": "10.0.0.1", 00:11:35.363 "trsvcid": "53968" 00:11:35.363 }, 00:11:35.363 "auth": { 00:11:35.363 "state": "completed", 00:11:35.363 "digest": 
"sha512", 00:11:35.363 "dhgroup": "ffdhe8192" 00:11:35.363 } 00:11:35.363 } 00:11:35.363 ]' 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.363 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.622 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:35.622 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:01:ZGExNzEyMmI4M2FkYmFkNzJjMTFmYzVjMzI4ZmI0MjV5OMHX: 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.559 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.127 00:11:37.127 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.127 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.127 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.386 { 00:11:37.386 "cntlid": 143, 00:11:37.386 "qid": 0, 00:11:37.386 "state": "enabled", 00:11:37.386 "thread": "nvmf_tgt_poll_group_000", 00:11:37.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:37.386 "listen_address": { 00:11:37.386 "trtype": "TCP", 00:11:37.386 "adrfam": "IPv4", 00:11:37.386 "traddr": "10.0.0.3", 00:11:37.386 "trsvcid": "4420" 00:11:37.386 }, 00:11:37.386 "peer_address": { 00:11:37.386 "trtype": "TCP", 00:11:37.386 "adrfam": "IPv4", 00:11:37.386 "traddr": "10.0.0.1", 00:11:37.386 "trsvcid": "54002" 00:11:37.386 }, 00:11:37.386 "auth": { 00:11:37.386 "state": "completed", 00:11:37.386 
"digest": "sha512", 00:11:37.386 "dhgroup": "ffdhe8192" 00:11:37.386 } 00:11:37.386 } 00:11:37.386 ]' 00:11:37.386 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.646 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.905 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:37.905 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:38.472 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.472 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:38.472 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.472 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.473 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.040 00:11:39.040 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.040 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.040 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.299 { 00:11:39.299 "cntlid": 145, 00:11:39.299 "qid": 0, 00:11:39.299 "state": "enabled", 00:11:39.299 "thread": "nvmf_tgt_poll_group_000", 00:11:39.299 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:39.299 "listen_address": { 00:11:39.299 "trtype": "TCP", 00:11:39.299 "adrfam": "IPv4", 00:11:39.299 "traddr": "10.0.0.3", 00:11:39.299 "trsvcid": "4420" 00:11:39.299 }, 00:11:39.299 "peer_address": { 00:11:39.299 "trtype": "TCP", 00:11:39.299 "adrfam": "IPv4", 00:11:39.299 "traddr": "10.0.0.1", 00:11:39.299 "trsvcid": "54030" 00:11:39.299 }, 00:11:39.299 "auth": { 00:11:39.299 "state": "completed", 00:11:39.299 "digest": "sha512", 00:11:39.299 "dhgroup": "ffdhe8192" 00:11:39.299 } 00:11:39.299 } 00:11:39.299 ]' 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.299 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.558 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:39.558 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.558 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.558 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.558 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.817 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:39.817 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:00:MTlkNjRjNDlhNDlkNjIxOWRiYzQxODM1NjljNWM2ZGZkZTBiODViMzc4OWQ2MDFlrGTNeA==: --dhchap-ctrl-secret DHHC-1:03:MmVjNjA4OWNiYWRjYzc0NzdjOTU2M2Y1ZGU2YTVhZjgxMTY0NTE1ZTAxNDEwMzMyYzIzNDU3NjI3ZDJiNzA4OJqnSKk=: 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 00:11:40.400 13:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:40.400 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:40.967 request: 00:11:40.967 { 00:11:40.967 "name": "nvme0", 00:11:40.967 "trtype": "tcp", 00:11:40.967 "traddr": "10.0.0.3", 00:11:40.967 "adrfam": "ipv4", 00:11:40.967 "trsvcid": "4420", 00:11:40.967 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:40.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:40.968 "prchk_reftag": false, 00:11:40.968 "prchk_guard": false, 00:11:40.968 "hdgst": false, 00:11:40.968 "ddgst": false, 00:11:40.968 "dhchap_key": "key2", 00:11:40.968 "allow_unrecognized_csi": false, 00:11:40.968 "method": "bdev_nvme_attach_controller", 00:11:40.968 "req_id": 1 00:11:40.968 } 00:11:40.968 Got JSON-RPC error response 00:11:40.968 response: 00:11:40.968 { 00:11:40.968 "code": -5, 00:11:40.968 "message": "Input/output error" 00:11:40.968 } 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:40.968 
13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:40.968 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:41.227 request: 00:11:41.227 { 00:11:41.227 "name": "nvme0", 00:11:41.227 "trtype": "tcp", 00:11:41.227 "traddr": "10.0.0.3", 00:11:41.227 "adrfam": "ipv4", 00:11:41.227 "trsvcid": "4420", 00:11:41.227 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:41.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:41.227 "prchk_reftag": false, 00:11:41.227 "prchk_guard": false, 00:11:41.227 "hdgst": false, 00:11:41.227 "ddgst": false, 00:11:41.227 "dhchap_key": "key1", 00:11:41.227 "dhchap_ctrlr_key": "ckey2", 00:11:41.227 "allow_unrecognized_csi": false, 00:11:41.227 "method": "bdev_nvme_attach_controller", 00:11:41.227 "req_id": 1 00:11:41.227 } 00:11:41.227 Got JSON-RPC error response 00:11:41.227 response: 00:11:41.227 { 
00:11:41.227 "code": -5, 00:11:41.227 "message": "Input/output error" 00:11:41.227 } 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.485 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.486 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.054 
request: 00:11:42.054 { 00:11:42.054 "name": "nvme0", 00:11:42.054 "trtype": "tcp", 00:11:42.054 "traddr": "10.0.0.3", 00:11:42.054 "adrfam": "ipv4", 00:11:42.054 "trsvcid": "4420", 00:11:42.054 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:42.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:42.054 "prchk_reftag": false, 00:11:42.054 "prchk_guard": false, 00:11:42.054 "hdgst": false, 00:11:42.054 "ddgst": false, 00:11:42.054 "dhchap_key": "key1", 00:11:42.054 "dhchap_ctrlr_key": "ckey1", 00:11:42.054 "allow_unrecognized_csi": false, 00:11:42.054 "method": "bdev_nvme_attach_controller", 00:11:42.054 "req_id": 1 00:11:42.054 } 00:11:42.054 Got JSON-RPC error response 00:11:42.054 response: 00:11:42.054 { 00:11:42.054 "code": -5, 00:11:42.054 "message": "Input/output error" 00:11:42.054 } 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67228 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67228 ']' 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67228 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67228 00:11:42.054 killing process with pid 67228 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67228' 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67228 00:11:42.054 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67228 00:11:42.313 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.314 13:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70148 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70148 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70148 ']' 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.314 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70148 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70148 ']' 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
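The entries above show the original target (pid 67228) being killed and a fresh one launched with --wait-for-rpc and the nvmf_auth debug flag, so that keyring-backed DH-HMAC-CHAP keys can be registered before initialization completes. A minimal sketch of that restart pattern follows; the binary path, netns name and flags are copied from this log, while the framework_start_init call is an assumed follow-up step (standard for --wait-for-rpc startups) that is not visible in this excerpt:

  # start the target paused, waiting for RPC configuration (paths/flags as logged above)
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # once keyrings and options are configured, resume initialization (assumed step)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init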
00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.573 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.832 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.833 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:42.833 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:11:42.833 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.833 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.833 null0 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.t3k 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.sOf ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sOf 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XRU 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.k9h ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k9h 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:43.092 13:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6t9 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.lgD ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lgD 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZOv 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
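At this point all four keys and their controller counterparts have been registered through keyring_file_add_key, and the test re-runs connect_authenticate with sha512/ffdhe8192 using key3. A condensed sketch of that flow, using only RPCs and arguments visible in this log (rpc.py path, key file name, NQNs, and the host-side /var/tmp/host.sock socket are as logged); treat it as an illustration of the sequence, not a replacement for the harness helpers:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: register the key file under a keyring name and allow the host NQN to use it
  $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZOv
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3

  # host side: attach a controller that authenticates with the same key; the qpair dump that
  # follows confirms auth completed with digest sha512 and dhgroup ffdhe8192
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3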
00:11:43.092 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.052 nvme0n1 00:11:44.053 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.053 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.053 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.053 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.053 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.053 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.053 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.311 { 00:11:44.311 "cntlid": 1, 00:11:44.311 "qid": 0, 00:11:44.311 "state": "enabled", 00:11:44.311 "thread": "nvmf_tgt_poll_group_000", 00:11:44.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:44.311 "listen_address": { 00:11:44.311 "trtype": "TCP", 00:11:44.311 "adrfam": "IPv4", 00:11:44.311 "traddr": "10.0.0.3", 00:11:44.311 "trsvcid": "4420" 00:11:44.311 }, 00:11:44.311 "peer_address": { 00:11:44.311 "trtype": "TCP", 00:11:44.311 "adrfam": "IPv4", 00:11:44.311 "traddr": "10.0.0.1", 00:11:44.311 "trsvcid": "54088" 00:11:44.311 }, 00:11:44.311 "auth": { 00:11:44.311 "state": "completed", 00:11:44.311 "digest": "sha512", 00:11:44.311 "dhgroup": "ffdhe8192" 00:11:44.311 } 00:11:44.311 } 00:11:44.311 ]' 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.311 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.570 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:44.570 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:45.137 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key3 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:11:45.397 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.656 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.915 request: 00:11:45.915 { 00:11:45.915 "name": "nvme0", 00:11:45.915 "trtype": "tcp", 00:11:45.915 "traddr": "10.0.0.3", 00:11:45.915 "adrfam": "ipv4", 00:11:45.915 "trsvcid": "4420", 00:11:45.915 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:45.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:45.915 "prchk_reftag": false, 00:11:45.915 "prchk_guard": false, 00:11:45.915 "hdgst": false, 00:11:45.915 "ddgst": false, 00:11:45.915 "dhchap_key": "key3", 00:11:45.915 "allow_unrecognized_csi": false, 00:11:45.915 "method": "bdev_nvme_attach_controller", 00:11:45.915 "req_id": 1 00:11:45.915 } 00:11:45.915 Got JSON-RPC error response 00:11:45.915 response: 00:11:45.915 { 00:11:45.915 "code": -5, 00:11:45.915 "message": "Input/output error" 00:11:45.915 } 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:45.915 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.174 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.433 request: 00:11:46.433 { 00:11:46.433 "name": "nvme0", 00:11:46.433 "trtype": "tcp", 00:11:46.433 "traddr": "10.0.0.3", 00:11:46.433 "adrfam": "ipv4", 00:11:46.433 "trsvcid": "4420", 00:11:46.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:46.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:46.433 "prchk_reftag": false, 00:11:46.433 "prchk_guard": false, 00:11:46.433 "hdgst": false, 00:11:46.433 "ddgst": false, 00:11:46.433 "dhchap_key": "key3", 00:11:46.433 "allow_unrecognized_csi": false, 00:11:46.433 "method": "bdev_nvme_attach_controller", 00:11:46.433 "req_id": 1 00:11:46.433 } 00:11:46.433 Got JSON-RPC error response 00:11:46.433 response: 00:11:46.433 { 00:11:46.433 "code": -5, 00:11:46.433 "message": "Input/output error" 00:11:46.433 } 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:46.433 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:46.692 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:46.951 request: 00:11:46.951 { 00:11:46.951 "name": "nvme0", 00:11:46.951 "trtype": "tcp", 00:11:46.951 "traddr": "10.0.0.3", 00:11:46.951 "adrfam": "ipv4", 00:11:46.951 "trsvcid": "4420", 00:11:46.951 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:46.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:46.951 "prchk_reftag": false, 00:11:46.951 "prchk_guard": false, 00:11:46.951 "hdgst": false, 00:11:46.951 "ddgst": false, 00:11:46.951 "dhchap_key": "key0", 00:11:46.951 "dhchap_ctrlr_key": "key1", 00:11:46.951 "allow_unrecognized_csi": false, 00:11:46.951 "method": "bdev_nvme_attach_controller", 00:11:46.951 "req_id": 1 00:11:46.951 } 00:11:46.951 Got JSON-RPC error response 00:11:46.951 response: 00:11:46.951 { 00:11:46.951 "code": -5, 00:11:46.951 "message": "Input/output error" 00:11:46.951 } 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:46.951 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:47.518 nvme0n1 00:11:47.518 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:11:47.518 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:11:47.518 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.777 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.777 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.777 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.035 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 00:11:48.035 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.035 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.035 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.035 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:48.035 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:48.035 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:48.601 nvme0n1 00:11:48.860 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:11:48.860 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:11:48.860 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.860 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:11:49.119 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.119 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:49.119 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid c87b64e3-aa64-4edb-937d-9804b9d918ba -l 0 --dhchap-secret DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: --dhchap-ctrl-secret DHHC-1:03:YWFiOWMwM2E5N2Y4MDZjOTZjMzI1MTg2NWRiMTliNmY2M2E1NTllZjM0NzhlYTNjZjBhMjliZWJiYWQ1ZmNhZaxzHTU=: 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.686 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:49.945 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:50.515 request: 00:11:50.515 { 00:11:50.515 "name": "nvme0", 00:11:50.515 "trtype": "tcp", 00:11:50.515 "traddr": "10.0.0.3", 00:11:50.515 "adrfam": "ipv4", 00:11:50.515 "trsvcid": "4420", 00:11:50.515 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:50.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba", 00:11:50.515 "prchk_reftag": false, 00:11:50.515 "prchk_guard": false, 00:11:50.515 "hdgst": false, 00:11:50.515 "ddgst": false, 00:11:50.515 "dhchap_key": "key1", 00:11:50.515 "allow_unrecognized_csi": false, 00:11:50.515 "method": "bdev_nvme_attach_controller", 00:11:50.515 "req_id": 1 00:11:50.515 } 00:11:50.515 Got JSON-RPC error response 00:11:50.515 response: 00:11:50.515 { 00:11:50.515 "code": -5, 00:11:50.515 "message": "Input/output error" 00:11:50.515 } 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:50.515 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:51.450 nvme0n1 00:11:51.450 
13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:11:51.450 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:11:51.450 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.708 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.708 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.708 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:51.967 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:52.226 nvme0n1 00:11:52.226 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:11:52.226 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.226 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:11:52.484 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.484 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.484 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.743 13:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: '' 2s 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: ]] 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODRiNmZmZmJmZTBjNDU1YzNiZGY2NGJjYzNkZWQ2YzkmHzrD: 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:52.743 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key1 --dhchap-ctrlr-key key2 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.645 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: 2s 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:54.904 13:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: ]] 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmY0MDNmZmM2MDAyNGM1NjkwMGI3OTc4YmJmNjZmYTFhYzhlZmY4YjI0MWIyNmI3UjdgXg==: 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:54.904 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:56.807 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:57.743 nvme0n1 00:11:57.743 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:57.743 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.743 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.743 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.743 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:57.743 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:58.310 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:11:58.310 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:11:58.310 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:11:58.569 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:11:58.828 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:11:58.828 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:11:58.828 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:59.087 13:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:59.087 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:59.653 request: 00:11:59.653 { 00:11:59.653 "name": "nvme0", 00:11:59.653 "dhchap_key": "key1", 00:11:59.653 "dhchap_ctrlr_key": "key3", 00:11:59.653 "method": "bdev_nvme_set_keys", 00:11:59.653 "req_id": 1 00:11:59.653 } 00:11:59.653 Got JSON-RPC error response 00:11:59.653 response: 00:11:59.653 { 00:11:59.653 "code": -13, 00:11:59.653 "message": "Permission denied" 00:11:59.653 } 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:59.653 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.911 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:11:59.911 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:01.285 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:02.220 nvme0n1 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:02.221 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:02.788 request: 00:12:02.788 { 00:12:02.788 "name": "nvme0", 00:12:02.788 "dhchap_key": "key2", 00:12:02.788 "dhchap_ctrlr_key": "key0", 00:12:02.788 "method": "bdev_nvme_set_keys", 00:12:02.788 "req_id": 1 00:12:02.788 } 00:12:02.788 Got JSON-RPC error response 00:12:02.788 response: 00:12:02.788 { 00:12:02.788 "code": -13, 00:12:02.788 "message": "Permission denied" 00:12:02.788 } 00:12:02.788 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:02.788 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:02.788 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:02.788 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:02.788 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:02.788 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.789 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:03.048 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:03.048 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:03.985 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:03.985 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:03.985 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67257 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67257 ']' 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67257 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.244 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67257 00:12:04.503 killing process with pid 67257 00:12:04.503 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.503 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.503 13:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67257' 00:12:04.503 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67257 00:12:04.503 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67257 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.762 rmmod nvme_tcp 00:12:04.762 rmmod nvme_fabrics 00:12:04.762 rmmod nvme_keyring 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70148 ']' 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70148 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70148 ']' 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70148 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70148 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.762 killing process with pid 70148 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70148' 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70148 00:12:04.762 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70148 00:12:05.020 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.020 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:05.021 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.t3k /tmp/spdk.key-sha256.XRU /tmp/spdk.key-sha384.6t9 /tmp/spdk.key-sha512.ZOv /tmp/spdk.key-sha512.sOf /tmp/spdk.key-sha384.k9h /tmp/spdk.key-sha256.lgD '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:05.279 00:12:05.279 real 2m53.023s 00:12:05.279 user 6m54.902s 00:12:05.279 sys 0m26.733s 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.279 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.280 ************************************ 00:12:05.280 END TEST nvmf_auth_target 
00:12:05.280 ************************************ 00:12:05.280 13:21:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:05.280 13:21:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:05.280 13:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.280 13:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.280 13:21:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.539 ************************************ 00:12:05.539 START TEST nvmf_bdevio_no_huge 00:12:05.540 ************************************ 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:05.540 * Looking for test storage... 00:12:05.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.540 --rc genhtml_branch_coverage=1 00:12:05.540 --rc genhtml_function_coverage=1 00:12:05.540 --rc genhtml_legend=1 00:12:05.540 --rc geninfo_all_blocks=1 00:12:05.540 --rc geninfo_unexecuted_blocks=1 00:12:05.540 00:12:05.540 ' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.540 --rc genhtml_branch_coverage=1 00:12:05.540 --rc genhtml_function_coverage=1 00:12:05.540 --rc genhtml_legend=1 00:12:05.540 --rc geninfo_all_blocks=1 00:12:05.540 --rc geninfo_unexecuted_blocks=1 00:12:05.540 00:12:05.540 ' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.540 --rc genhtml_branch_coverage=1 00:12:05.540 --rc genhtml_function_coverage=1 00:12:05.540 --rc genhtml_legend=1 00:12:05.540 --rc geninfo_all_blocks=1 00:12:05.540 --rc geninfo_unexecuted_blocks=1 00:12:05.540 00:12:05.540 ' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.540 --rc genhtml_branch_coverage=1 00:12:05.540 --rc genhtml_function_coverage=1 00:12:05.540 --rc genhtml_legend=1 00:12:05.540 --rc geninfo_all_blocks=1 00:12:05.540 --rc geninfo_unexecuted_blocks=1 00:12:05.540 00:12:05.540 ' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.540 
13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.540 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.541 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.541 
13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:05.541 Cannot find device "nvmf_init_br" 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:05.541 Cannot find device "nvmf_init_br2" 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:05.541 Cannot find device "nvmf_tgt_br" 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:05.541 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.800 Cannot find device "nvmf_tgt_br2" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:05.800 Cannot find device "nvmf_init_br" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:05.800 Cannot find device "nvmf_init_br2" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:05.800 Cannot find device "nvmf_tgt_br" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:05.800 Cannot find device "nvmf_tgt_br2" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:05.800 Cannot find device "nvmf_br" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:05.800 Cannot find device "nvmf_init_if" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:05.800 Cannot find device "nvmf_init_if2" 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:05.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:05.800 13:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:05.800 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.800 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:06.060 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.060 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:06.060 00:12:06.060 --- 10.0.0.3 ping statistics --- 00:12:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.060 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:06.060 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:06.060 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:12:06.060 00:12:06.060 --- 10.0.0.4 ping statistics --- 00:12:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.060 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:06.060 00:12:06.060 --- 10.0.0.1 ping statistics --- 00:12:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.060 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:06.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:06.060 00:12:06.060 --- 10.0.0.2 ping statistics --- 00:12:06.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.060 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70779 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70779 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70779 ']' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.060 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.060 [2024-11-17 13:21:55.116470] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:06.060 [2024-11-17 13:21:55.116529] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:06.060 [2024-11-17 13:21:55.265290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.319 [2024-11-17 13:21:55.321999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.319 [2024-11-17 13:21:55.322049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.319 [2024-11-17 13:21:55.322075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.319 [2024-11-17 13:21:55.322082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.319 [2024-11-17 13:21:55.322089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.319 [2024-11-17 13:21:55.322819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:06.319 [2024-11-17 13:21:55.322968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:06.319 [2024-11-17 13:21:55.323128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:06.319 [2024-11-17 13:21:55.323468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.319 [2024-11-17 13:21:55.327573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 [2024-11-17 13:21:56.042640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 Malloc0 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 13:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 [2024-11-17 13:21:56.090917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:06.887 { 00:12:06.887 "params": { 00:12:06.887 "name": "Nvme$subsystem", 00:12:06.887 "trtype": "$TEST_TRANSPORT", 00:12:06.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.887 "adrfam": "ipv4", 00:12:06.887 "trsvcid": "$NVMF_PORT", 00:12:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.887 "hdgst": ${hdgst:-false}, 00:12:06.887 "ddgst": ${ddgst:-false} 00:12:06.887 }, 00:12:06.887 "method": "bdev_nvme_attach_controller" 00:12:06.887 } 00:12:06.887 EOF 00:12:06.887 )") 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
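Everything the target needs for this case has been set up over RPC at this point; condensed, the sequence is the one below (a sketch that simply restates the calls visible in this run, with rpc.py standing in for the full scripts/rpc.py path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then assembles the host-side configuration that bdevio reads through --json /dev/fd/62; the expanded document is printed next.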
00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:06.887 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:06.887 "params": { 00:12:06.887 "name": "Nvme1", 00:12:06.887 "trtype": "tcp", 00:12:06.887 "traddr": "10.0.0.3", 00:12:06.887 "adrfam": "ipv4", 00:12:06.887 "trsvcid": "4420", 00:12:06.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.887 "hdgst": false, 00:12:06.887 "ddgst": false 00:12:06.887 }, 00:12:06.887 "method": "bdev_nvme_attach_controller" 00:12:06.887 }' 00:12:07.146 [2024-11-17 13:21:56.152409] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:07.146 [2024-11-17 13:21:56.152665] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70814 ] 00:12:07.146 [2024-11-17 13:21:56.313248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.406 [2024-11-17 13:21:56.370565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.406 [2024-11-17 13:21:56.370717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.406 [2024-11-17 13:21:56.370727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.406 [2024-11-17 13:21:56.383456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.406 I/O targets: 00:12:07.406 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:07.406 00:12:07.406 00:12:07.406 CUnit - A unit testing framework for C - Version 2.1-3 00:12:07.406 http://cunit.sourceforge.net/ 00:12:07.406 00:12:07.406 00:12:07.406 Suite: bdevio tests on: Nvme1n1 00:12:07.406 Test: blockdev write read block ...passed 00:12:07.406 Test: blockdev write zeroes read block ...passed 00:12:07.406 Test: blockdev write zeroes read no split ...passed 00:12:07.406 Test: blockdev write zeroes read split ...passed 00:12:07.406 Test: blockdev write zeroes read split partial ...passed 00:12:07.406 Test: blockdev reset ...[2024-11-17 13:21:56.605698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:07.406 [2024-11-17 13:21:56.606143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713310 (9): Bad file descriptor 00:12:07.406 [2024-11-17 13:21:56.626682] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:07.666 passed 00:12:07.666 Test: blockdev write read 8 blocks ...passed 00:12:07.666 Test: blockdev write read size > 128k ...passed 00:12:07.666 Test: blockdev write read invalid size ...passed 00:12:07.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:07.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:07.666 Test: blockdev write read max offset ...passed 00:12:07.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:07.666 Test: blockdev writev readv 8 blocks ...passed 00:12:07.666 Test: blockdev writev readv 30 x 1block ...passed 00:12:07.666 Test: blockdev writev readv block ...passed 00:12:07.666 Test: blockdev writev readv size > 128k ...passed 00:12:07.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:07.666 Test: blockdev comparev and writev ...[2024-11-17 13:21:56.636833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.636878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.636906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.636923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.637282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.637334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.637363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.637385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.637810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.637852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.637909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.637927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.638332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.638369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.638397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:07.666 [2024-11-17 13:21:56.638414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:07.666 passed 00:12:07.666 Test: blockdev nvme passthru rw ...passed 00:12:07.666 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:21:56.639803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.666 [2024-11-17 13:21:56.639855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.640048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.666 [2024-11-17 13:21:56.640074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:07.666 [2024-11-17 13:21:56.640251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.666 [2024-11-17 13:21:56.640282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:07.666 passed 00:12:07.666 Test: blockdev nvme admin passthru ...[2024-11-17 13:21:56.640530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.666 [2024-11-17 13:21:56.640561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:07.666 passed 00:12:07.666 Test: blockdev copy ...passed 00:12:07.666 00:12:07.666 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.666 suites 1 1 n/a 0 0 00:12:07.666 tests 23 23 23 0 0 00:12:07.666 asserts 152 152 152 0 n/a 00:12:07.666 00:12:07.666 Elapsed time = 0.174 seconds 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.925 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.925 rmmod nvme_tcp 00:12:07.925 rmmod nvme_fabrics 00:12:07.925 rmmod nvme_keyring 00:12:08.184 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.184 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70779 ']' 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70779 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70779 ']' 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70779 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70779 00:12:08.185 killing process with pid 70779 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70779' 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70779 00:12:08.185 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70779 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.444 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:08.703 13:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:08.703 00:12:08.703 real 0m3.329s 00:12:08.703 user 0m10.369s 00:12:08.703 sys 0m1.278s 00:12:08.703 ************************************ 00:12:08.703 END TEST nvmf_bdevio_no_huge 00:12:08.703 ************************************ 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.703 ************************************ 00:12:08.703 START TEST nvmf_tls 00:12:08.703 ************************************ 00:12:08.703 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:08.963 * Looking for test storage... 
00:12:08.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.963 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.963 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.963 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.963 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.964 --rc genhtml_branch_coverage=1 00:12:08.964 --rc genhtml_function_coverage=1 00:12:08.964 --rc genhtml_legend=1 00:12:08.964 --rc geninfo_all_blocks=1 00:12:08.964 --rc geninfo_unexecuted_blocks=1 00:12:08.964 00:12:08.964 ' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.964 --rc genhtml_branch_coverage=1 00:12:08.964 --rc genhtml_function_coverage=1 00:12:08.964 --rc genhtml_legend=1 00:12:08.964 --rc geninfo_all_blocks=1 00:12:08.964 --rc geninfo_unexecuted_blocks=1 00:12:08.964 00:12:08.964 ' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.964 --rc genhtml_branch_coverage=1 00:12:08.964 --rc genhtml_function_coverage=1 00:12:08.964 --rc genhtml_legend=1 00:12:08.964 --rc geninfo_all_blocks=1 00:12:08.964 --rc geninfo_unexecuted_blocks=1 00:12:08.964 00:12:08.964 ' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.964 --rc genhtml_branch_coverage=1 00:12:08.964 --rc genhtml_function_coverage=1 00:12:08.964 --rc genhtml_legend=1 00:12:08.964 --rc geninfo_all_blocks=1 00:12:08.964 --rc geninfo_unexecuted_blocks=1 00:12:08.964 00:12:08.964 ' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.964 13:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:08.964 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.965 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.965 
13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:08.965 Cannot find device "nvmf_init_br" 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:08.965 Cannot find device "nvmf_init_br2" 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:08.965 Cannot find device "nvmf_tgt_br" 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.965 Cannot find device "nvmf_tgt_br2" 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:08.965 Cannot find device "nvmf_init_br" 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:08.965 Cannot find device "nvmf_init_br2" 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:08.965 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:09.224 Cannot find device "nvmf_tgt_br" 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:09.224 Cannot find device "nvmf_tgt_br2" 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:09.224 Cannot find device "nvmf_br" 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:09.224 Cannot find device "nvmf_init_if" 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:09.224 Cannot find device "nvmf_init_if2" 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:09.224 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:09.225 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:09.225 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.225 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:09.484 13:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:09.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:09.484 00:12:09.484 --- 10.0.0.3 ping statistics --- 00:12:09.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.484 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:09.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:09.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:12:09.484 00:12:09.484 --- 10.0.0.4 ping statistics --- 00:12:09.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.484 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:09.484 00:12:09.484 --- 10.0.0.1 ping statistics --- 00:12:09.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.484 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:09.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:09.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:12:09.484 00:12:09.484 --- 10.0.0.2 ping statistics --- 00:12:09.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.484 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71049 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71049 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71049 ']' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.484 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.484 [2024-11-17 13:21:58.559813] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
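Unlike the bdevio case, the TLS target is started with --wait-for-rpc, so the SSL socket implementation can be configured before the framework initializes. The ordering exercised by tls.sh below reduces to the following sketch (all three RPCs appear verbatim later in the log):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init

Before settling on TLS 1.3, the script also round-trips --tls-version 7 and the --enable-ktls/--disable-ktls switches through sock_impl_get_options to confirm the setters take effect.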
00:12:09.484 [2024-11-17 13:21:58.560046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.484 [2024-11-17 13:21:58.698907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.743 [2024-11-17 13:21:58.740585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.743 [2024-11-17 13:21:58.740900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.743 [2024-11-17 13:21:58.741055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.743 [2024-11-17 13:21:58.741292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.743 [2024-11-17 13:21:58.741339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.743 [2024-11-17 13:21:58.741708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:09.743 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:10.002 true 00:12:10.002 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:10.002 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:10.260 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:10.260 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:10.260 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:10.518 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:10.518 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:10.518 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:10.518 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:10.518 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:10.777 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:10.777 13:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:11.036 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:11.036 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:11.036 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:11.036 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:11.294 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:11.294 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:11.294 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:11.553 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:11.553 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:11.828 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:11.828 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:11.828 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:12.118 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:12.118 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:12.385 13:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.EGJvGSpe2d 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.daCugYrxRZ 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.EGJvGSpe2d 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.daCugYrxRZ 00:12:12.385 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:12.644 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:13.210 [2024-11-17 13:22:02.147542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.210 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.EGJvGSpe2d 00:12:13.210 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EGJvGSpe2d 00:12:13.210 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:13.210 [2024-11-17 13:22:02.395375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.210 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:13.778 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:13.778 [2024-11-17 13:22:02.895426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:13.778 [2024-11-17 13:22:02.895597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:13.778 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:14.036 malloc0 00:12:14.036 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:14.295 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.EGJvGSpe2d 00:12:14.553 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:14.812 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EGJvGSpe2d 00:12:24.789 Initializing NVMe Controllers 00:12:24.789 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:24.789 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:24.789 Initialization complete. Launching workers. 00:12:24.789 ======================================================== 00:12:24.789 Latency(us) 00:12:24.789 Device Information : IOPS MiB/s Average min max 00:12:24.789 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11637.27 45.46 5500.59 892.80 8606.18 00:12:24.789 ======================================================== 00:12:24.789 Total : 11637.27 45.46 5500.59 892.80 8606.18 00:12:24.789 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EGJvGSpe2d 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EGJvGSpe2d 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71273 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71273 /var/tmp/bdevperf.sock 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71273 ']' 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:25.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
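The format_interchange_psk calls traced above (target/tls.sh@119-120) wrap a raw secret in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash indicator taken from the digest argument (01 here, 02 for the longer key generated later in the run), and a base64 blob, all colon-separated. The sketch below shows how such a key could be produced; it assumes the blob is the secret bytes followed by their CRC-32 (packed little-endian) and then base64-encoded, and the helper name make_tls_psk is invented for illustration rather than taken from nvmf/common.sh.

# Sketch of the interchange-key formatting seen above
# (assumption: blob = base64(secret || CRC-32(secret) little-endian)).
make_tls_psk() {
    local secret=$1 digest=$2
    python3 - "$secret" "$digest" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                 # the test passes the hex string as literal ASCII bytes
crc = struct.pack("<I", zlib.crc32(secret))   # assumed CRC-32 trailer, little-endian
blob = base64.b64encode(secret + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{blob}:")
PY
}
# make_tls_psk 00112233445566778899aabbccddeeff 1
# -> should reproduce the first key printed above if the CRC-32 assumption holds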
00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.050 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:25.050 [2024-11-17 13:22:14.072306] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:25.050 [2024-11-17 13:22:14.072604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71273 ] 00:12:25.050 [2024-11-17 13:22:14.227548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.309 [2024-11-17 13:22:14.280486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.309 [2024-11-17 13:22:14.336636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:25.309 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.309 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:25.309 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EGJvGSpe2d 00:12:25.568 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:25.827 [2024-11-17 13:22:14.915395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:25.827 TLSTESTn1 00:12:25.827 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:26.086 Running I/O for 10 seconds... 
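Stripped of the xtrace prefixes and timestamps, the passing TLS case traced above (target/tls.sh@134 setup and the @144 bdevperf run starting here) boils down to the command sequence below. This is a condensed sketch, not the test script itself: it reuses the paths and addresses from this run (rpc.py under /home/vagrant/spdk_repo/spdk/scripts, the listener on 10.0.0.3:4420, the key file /tmp/tmp.EGJvGSpe2d) and glosses over how nvmf_tgt is launched inside the nvmf_tgt_ns_spdk network namespace and how the scripts wait for each RPC socket before issuing calls.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TLS 1.3 on the ssl sock impl, a TCP transport, and a subsystem
# whose listener (-k) and allowed host are bound to the PSK registered as key0.
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.EGJvGSpe2d
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side: bdevperf waits in -z mode on its own RPC socket, gets the same key,
# attaches over TLS, then perform_tests drives the 10-second verify workload.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EGJvGSpe2d
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests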
00:12:27.958 4699.00 IOPS, 18.36 MiB/s [2024-11-17T13:22:18.118Z] 4776.50 IOPS, 18.66 MiB/s [2024-11-17T13:22:19.495Z] 4807.00 IOPS, 18.78 MiB/s [2024-11-17T13:22:20.431Z] 4814.50 IOPS, 18.81 MiB/s [2024-11-17T13:22:21.367Z] 4826.20 IOPS, 18.85 MiB/s [2024-11-17T13:22:22.304Z] 4835.67 IOPS, 18.89 MiB/s [2024-11-17T13:22:23.241Z] 4842.86 IOPS, 18.92 MiB/s [2024-11-17T13:22:24.177Z] 4845.25 IOPS, 18.93 MiB/s [2024-11-17T13:22:25.114Z] 4849.78 IOPS, 18.94 MiB/s [2024-11-17T13:22:25.114Z] 4850.70 IOPS, 18.95 MiB/s 00:12:35.890 Latency(us) 00:12:35.890 [2024-11-17T13:22:25.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.890 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:35.890 Verification LBA range: start 0x0 length 0x2000 00:12:35.890 TLSTESTn1 : 10.01 4856.43 18.97 0.00 0.00 26312.86 5183.30 20852.36 00:12:35.890 [2024-11-17T13:22:25.114Z] =================================================================================================================== 00:12:35.890 [2024-11-17T13:22:25.114Z] Total : 4856.43 18.97 0.00 0.00 26312.86 5183.30 20852.36 00:12:35.890 { 00:12:35.890 "results": [ 00:12:35.890 { 00:12:35.890 "job": "TLSTESTn1", 00:12:35.890 "core_mask": "0x4", 00:12:35.890 "workload": "verify", 00:12:35.890 "status": "finished", 00:12:35.890 "verify_range": { 00:12:35.890 "start": 0, 00:12:35.890 "length": 8192 00:12:35.890 }, 00:12:35.890 "queue_depth": 128, 00:12:35.890 "io_size": 4096, 00:12:35.890 "runtime": 10.01436, 00:12:35.890 "iops": 4856.426172016983, 00:12:35.890 "mibps": 18.97041473444134, 00:12:35.890 "io_failed": 0, 00:12:35.890 "io_timeout": 0, 00:12:35.890 "avg_latency_us": 26312.857618949703, 00:12:35.890 "min_latency_us": 5183.301818181818, 00:12:35.890 "max_latency_us": 20852.363636363636 00:12:35.890 } 00:12:35.890 ], 00:12:35.890 "core_count": 1 00:12:35.890 } 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71273 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71273 ']' 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71273 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71273 00:12:36.149 killing process with pid 71273 00:12:36.149 Received shutdown signal, test time was about 10.000000 seconds 00:12:36.149 00:12:36.149 Latency(us) 00:12:36.149 [2024-11-17T13:22:25.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.149 [2024-11-17T13:22:25.373Z] =================================================================================================================== 00:12:36.149 [2024-11-17T13:22:25.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71273' 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71273 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71273 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.daCugYrxRZ 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.daCugYrxRZ 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.daCugYrxRZ 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.daCugYrxRZ 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71401 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71401 /var/tmp/bdevperf.sock 00:12:36.149 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:36.150 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71401 ']' 00:12:36.150 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.150 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.150 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.150 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.150 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.409 [2024-11-17 13:22:25.394861] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:36.409 [2024-11-17 13:22:25.395169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71401 ] 00:12:36.409 [2024-11-17 13:22:25.535309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.409 [2024-11-17 13:22:25.582316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.667 [2024-11-17 13:22:25.631642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:37.236 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.236 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:37.236 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.daCugYrxRZ 00:12:37.495 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:37.754 [2024-11-17 13:22:26.806787] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:37.754 [2024-11-17 13:22:26.811706] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:37.754 [2024-11-17 13:22:26.812331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbcfb0 (107): Transport endpoint is not connected 00:12:37.754 [2024-11-17 13:22:26.813317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbcfb0 (9): Bad file descriptor 00:12:37.754 [2024-11-17 13:22:26.814314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:37.754 [2024-11-17 13:22:26.814462] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:37.754 [2024-11-17 13:22:26.814493] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:37.754 [2024-11-17 13:22:26.814509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:12:37.754 request: 00:12:37.754 { 00:12:37.754 "name": "TLSTEST", 00:12:37.754 "trtype": "tcp", 00:12:37.754 "traddr": "10.0.0.3", 00:12:37.754 "adrfam": "ipv4", 00:12:37.754 "trsvcid": "4420", 00:12:37.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.754 "prchk_reftag": false, 00:12:37.754 "prchk_guard": false, 00:12:37.754 "hdgst": false, 00:12:37.754 "ddgst": false, 00:12:37.754 "psk": "key0", 00:12:37.754 "allow_unrecognized_csi": false, 00:12:37.754 "method": "bdev_nvme_attach_controller", 00:12:37.754 "req_id": 1 00:12:37.754 } 00:12:37.754 Got JSON-RPC error response 00:12:37.754 response: 00:12:37.754 { 00:12:37.754 "code": -5, 00:12:37.754 "message": "Input/output error" 00:12:37.754 } 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71401 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71401 ']' 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71401 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71401 00:12:37.754 killing process with pid 71401 00:12:37.754 Received shutdown signal, test time was about 10.000000 seconds 00:12:37.754 00:12:37.754 Latency(us) 00:12:37.754 [2024-11-17T13:22:26.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.754 [2024-11-17T13:22:26.978Z] =================================================================================================================== 00:12:37.754 [2024-11-17T13:22:26.978Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71401' 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71401 00:12:37.754 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71401 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EGJvGSpe2d 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EGJvGSpe2d 
00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EGJvGSpe2d 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EGJvGSpe2d 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:38.013 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71430 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71430 /var/tmp/bdevperf.sock 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71430 ']' 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.014 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.014 [2024-11-17 13:22:27.066209] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:38.014 [2024-11-17 13:22:27.066446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71430 ] 00:12:38.014 [2024-11-17 13:22:27.196965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.014 [2024-11-17 13:22:27.233805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.272 [2024-11-17 13:22:27.284408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:38.272 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.272 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:38.272 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EGJvGSpe2d 00:12:38.532 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:12:38.791 [2024-11-17 13:22:27.820071] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:38.791 [2024-11-17 13:22:27.829624] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:38.792 [2024-11-17 13:22:27.829664] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:38.792 [2024-11-17 13:22:27.829717] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:38.792 [2024-11-17 13:22:27.830298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7afb0 (107): Transport endpoint is not connected 00:12:38.792 [2024-11-17 13:22:27.831289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7afb0 (9): Bad file descriptor 00:12:38.792 [2024-11-17 13:22:27.832287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:38.792 [2024-11-17 13:22:27.832311] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:38.792 [2024-11-17 13:22:27.832322] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:38.792 [2024-11-17 13:22:27.832337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:12:38.792 request: 00:12:38.792 { 00:12:38.792 "name": "TLSTEST", 00:12:38.792 "trtype": "tcp", 00:12:38.792 "traddr": "10.0.0.3", 00:12:38.792 "adrfam": "ipv4", 00:12:38.792 "trsvcid": "4420", 00:12:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.792 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:38.792 "prchk_reftag": false, 00:12:38.792 "prchk_guard": false, 00:12:38.792 "hdgst": false, 00:12:38.792 "ddgst": false, 00:12:38.792 "psk": "key0", 00:12:38.792 "allow_unrecognized_csi": false, 00:12:38.792 "method": "bdev_nvme_attach_controller", 00:12:38.792 "req_id": 1 00:12:38.792 } 00:12:38.792 Got JSON-RPC error response 00:12:38.792 response: 00:12:38.792 { 00:12:38.792 "code": -5, 00:12:38.792 "message": "Input/output error" 00:12:38.792 } 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71430 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71430 ']' 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71430 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71430 00:12:38.792 killing process with pid 71430 00:12:38.792 Received shutdown signal, test time was about 10.000000 seconds 00:12:38.792 00:12:38.792 Latency(us) 00:12:38.792 [2024-11-17T13:22:28.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.792 [2024-11-17T13:22:28.016Z] =================================================================================================================== 00:12:38.792 [2024-11-17T13:22:28.016Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71430' 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71430 00:12:38.792 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71430 00:12:39.051 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:39.051 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:39.051 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EGJvGSpe2d 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EGJvGSpe2d 
00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EGJvGSpe2d 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EGJvGSpe2d 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71451 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71451 /var/tmp/bdevperf.sock 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71451 ']' 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:39.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.052 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:39.052 [2024-11-17 13:22:28.091721] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:39.052 [2024-11-17 13:22:28.092140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71451 ] 00:12:39.052 [2024-11-17 13:22:28.228932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.052 [2024-11-17 13:22:28.266967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.311 [2024-11-17 13:22:28.317377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:39.311 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.311 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:39.311 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EGJvGSpe2d 00:12:39.570 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:39.570 [2024-11-17 13:22:28.772692] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:39.570 [2024-11-17 13:22:28.779809] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:39.570 [2024-11-17 13:22:28.780211] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:39.570 [2024-11-17 13:22:28.780291] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:39.570 [2024-11-17 13:22:28.781157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1ffb0 (107): Transport endpoint is not connected 00:12:39.570 [2024-11-17 13:22:28.782146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1ffb0 (9): Bad file descriptor 00:12:39.570 [2024-11-17 13:22:28.783144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:12:39.570 [2024-11-17 13:22:28.783392] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:39.570 [2024-11-17 13:22:28.783507] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:12:39.570 [2024-11-17 13:22:28.783532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:12:39.570 request: 00:12:39.570 { 00:12:39.570 "name": "TLSTEST", 00:12:39.570 "trtype": "tcp", 00:12:39.570 "traddr": "10.0.0.3", 00:12:39.570 "adrfam": "ipv4", 00:12:39.570 "trsvcid": "4420", 00:12:39.570 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:39.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.570 "prchk_reftag": false, 00:12:39.570 "prchk_guard": false, 00:12:39.570 "hdgst": false, 00:12:39.570 "ddgst": false, 00:12:39.570 "psk": "key0", 00:12:39.570 "allow_unrecognized_csi": false, 00:12:39.570 "method": "bdev_nvme_attach_controller", 00:12:39.570 "req_id": 1 00:12:39.570 } 00:12:39.570 Got JSON-RPC error response 00:12:39.570 response: 00:12:39.570 { 00:12:39.570 "code": -5, 00:12:39.570 "message": "Input/output error" 00:12:39.570 } 00:12:39.829 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71451 00:12:39.829 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71451 ']' 00:12:39.829 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71451 00:12:39.829 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:39.829 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.829 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71451 00:12:39.829 killing process with pid 71451 00:12:39.829 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.829 00:12:39.829 Latency(us) 00:12:39.829 [2024-11-17T13:22:29.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.829 [2024-11-17T13:22:29.053Z] =================================================================================================================== 00:12:39.829 [2024-11-17T13:22:29.053Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71451' 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71451 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71451 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:39.830 13:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71472 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:39.830 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71472 /var/tmp/bdevperf.sock 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71472 ']' 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:39.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.830 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.096 [2024-11-17 13:22:29.059900] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:40.096 [2024-11-17 13:22:29.060213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71472 ] 00:12:40.096 [2024-11-17 13:22:29.205161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.096 [2024-11-17 13:22:29.238807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.096 [2024-11-17 13:22:29.288372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:40.353 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.353 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:40.353 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:12:40.610 [2024-11-17 13:22:29.627287] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:12:40.610 [2024-11-17 13:22:29.627472] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:40.610 request: 00:12:40.610 { 00:12:40.610 "name": "key0", 00:12:40.610 "path": "", 00:12:40.610 "method": "keyring_file_add_key", 00:12:40.610 "req_id": 1 00:12:40.610 } 00:12:40.610 Got JSON-RPC error response 00:12:40.610 response: 00:12:40.610 { 00:12:40.610 "code": -1, 00:12:40.610 "message": "Operation not permitted" 00:12:40.610 } 00:12:40.610 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:40.868 [2024-11-17 13:22:29.843422] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:40.868 [2024-11-17 13:22:29.843628] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:40.868 request: 00:12:40.868 { 00:12:40.868 "name": "TLSTEST", 00:12:40.868 "trtype": "tcp", 00:12:40.868 "traddr": "10.0.0.3", 00:12:40.868 "adrfam": "ipv4", 00:12:40.868 "trsvcid": "4420", 00:12:40.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.868 "prchk_reftag": false, 00:12:40.868 "prchk_guard": false, 00:12:40.868 "hdgst": false, 00:12:40.868 "ddgst": false, 00:12:40.868 "psk": "key0", 00:12:40.868 "allow_unrecognized_csi": false, 00:12:40.868 "method": "bdev_nvme_attach_controller", 00:12:40.868 "req_id": 1 00:12:40.868 } 00:12:40.868 Got JSON-RPC error response 00:12:40.868 response: 00:12:40.868 { 00:12:40.868 "code": -126, 00:12:40.868 "message": "Required key not available" 00:12:40.868 } 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71472 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71472 ']' 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71472 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.868 13:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71472 00:12:40.868 killing process with pid 71472 00:12:40.868 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.868 00:12:40.868 Latency(us) 00:12:40.868 [2024-11-17T13:22:30.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.868 [2024-11-17T13:22:30.092Z] =================================================================================================================== 00:12:40.868 [2024-11-17T13:22:30.092Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71472' 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71472 00:12:40.868 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71472 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71049 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71049 ']' 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71049 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.868 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71049 00:12:41.127 killing process with pid 71049 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71049' 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71049 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71049 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:12:41.128 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.4dOTcdtkO4 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.4dOTcdtkO4 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71503 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71503 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71503 ']' 00:12:41.386 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:41.387 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.387 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.387 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.387 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.387 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.387 [2024-11-17 13:22:30.475980] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:41.387 [2024-11-17 13:22:30.476082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.645 [2024-11-17 13:22:30.619911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.645 [2024-11-17 13:22:30.666300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.645 [2024-11-17 13:22:30.666360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.645 [2024-11-17 13:22:30.666372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.646 [2024-11-17 13:22:30.666379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.646 [2024-11-17 13:22:30.666385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.646 [2024-11-17 13:22:30.666748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.646 [2024-11-17 13:22:30.737146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.213 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.213 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:42.213 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.213 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.213 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.471 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.471 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.4dOTcdtkO4 00:12:42.471 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4dOTcdtkO4 00:12:42.471 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:42.471 [2024-11-17 13:22:31.642199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.471 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:43.039 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:43.039 [2024-11-17 13:22:32.146302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:43.039 [2024-11-17 13:22:32.146528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:43.039 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:43.298 malloc0 00:12:43.298 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:43.558 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:12:43.817 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4dOTcdtkO4 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4dOTcdtkO4 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71564 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71564 /var/tmp/bdevperf.sock 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71564 ']' 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:44.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.076 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:44.076 [2024-11-17 13:22:33.189503] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:44.076 [2024-11-17 13:22:33.189785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71564 ] 00:12:44.335 [2024-11-17 13:22:33.329672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.335 [2024-11-17 13:22:33.367357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.335 [2024-11-17 13:22:33.417174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:44.335 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:12:44.593 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:44.853 [2024-11-17 13:22:33.892457] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:44.853 TLSTESTn1 00:12:44.853 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:45.125 Running I/O for 10 seconds... 00:12:47.027 4608.00 IOPS, 18.00 MiB/s [2024-11-17T13:22:37.187Z] 4755.50 IOPS, 18.58 MiB/s [2024-11-17T13:22:38.123Z] 4789.67 IOPS, 18.71 MiB/s [2024-11-17T13:22:39.500Z] 4807.75 IOPS, 18.78 MiB/s [2024-11-17T13:22:40.434Z] 4817.00 IOPS, 18.82 MiB/s [2024-11-17T13:22:41.375Z] 4822.83 IOPS, 18.84 MiB/s [2024-11-17T13:22:42.310Z] 4827.57 IOPS, 18.86 MiB/s [2024-11-17T13:22:43.246Z] 4830.25 IOPS, 18.87 MiB/s [2024-11-17T13:22:44.183Z] 4836.67 IOPS, 18.89 MiB/s [2024-11-17T13:22:44.183Z] 4841.70 IOPS, 18.91 MiB/s 00:12:54.959 Latency(us) 00:12:54.959 [2024-11-17T13:22:44.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.959 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:54.959 Verification LBA range: start 0x0 length 0x2000 00:12:54.959 TLSTESTn1 : 10.01 4847.75 18.94 0.00 0.00 26362.71 4796.04 20852.36 00:12:54.959 [2024-11-17T13:22:44.183Z] =================================================================================================================== 00:12:54.959 [2024-11-17T13:22:44.183Z] Total : 4847.75 18.94 0.00 0.00 26362.71 4796.04 20852.36 00:12:54.959 { 00:12:54.959 "results": [ 00:12:54.959 { 00:12:54.959 "job": "TLSTESTn1", 00:12:54.959 "core_mask": "0x4", 00:12:54.959 "workload": "verify", 00:12:54.959 "status": "finished", 00:12:54.959 "verify_range": { 00:12:54.959 "start": 0, 00:12:54.959 "length": 8192 00:12:54.959 }, 00:12:54.959 "queue_depth": 128, 00:12:54.959 "io_size": 4096, 00:12:54.959 "runtime": 10.013934, 00:12:54.959 "iops": 4847.745151905335, 00:12:54.959 "mibps": 18.936504499630214, 00:12:54.959 "io_failed": 0, 00:12:54.959 "io_timeout": 0, 00:12:54.959 "avg_latency_us": 26362.713099860484, 00:12:54.959 "min_latency_us": 4796.043636363636, 00:12:54.959 
"max_latency_us": 20852.363636363636 00:12:54.959 } 00:12:54.959 ], 00:12:54.959 "core_count": 1 00:12:54.959 } 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71564 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71564 ']' 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71564 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.959 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71564 00:12:54.959 killing process with pid 71564 00:12:54.959 Received shutdown signal, test time was about 10.000000 seconds 00:12:54.960 00:12:54.960 Latency(us) 00:12:54.960 [2024-11-17T13:22:44.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.960 [2024-11-17T13:22:44.184Z] =================================================================================================================== 00:12:54.960 [2024-11-17T13:22:44.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:54.960 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:54.960 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:54.960 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71564' 00:12:54.960 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71564 00:12:54.960 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71564 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.4dOTcdtkO4 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4dOTcdtkO4 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4dOTcdtkO4 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:55.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4dOTcdtkO4 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4dOTcdtkO4 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71692 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71692 /var/tmp/bdevperf.sock 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71692 ']' 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.219 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.479 [2024-11-17 13:22:44.459561] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:12:55.479 [2024-11-17 13:22:44.459638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71692 ] 00:12:55.479 [2024-11-17 13:22:44.600327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.479 [2024-11-17 13:22:44.643441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.737 [2024-11-17 13:22:44.713933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:55.737 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.737 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:55.738 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:12:55.996 [2024-11-17 13:22:45.040817] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4dOTcdtkO4': 0100666 00:12:55.996 [2024-11-17 13:22:45.041158] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:55.996 request: 00:12:55.996 { 00:12:55.996 "name": "key0", 00:12:55.996 "path": "/tmp/tmp.4dOTcdtkO4", 00:12:55.996 "method": "keyring_file_add_key", 00:12:55.996 "req_id": 1 00:12:55.996 } 00:12:55.996 Got JSON-RPC error response 00:12:55.996 response: 00:12:55.996 { 00:12:55.996 "code": -1, 00:12:55.996 "message": "Operation not permitted" 00:12:55.996 } 00:12:55.996 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:56.256 [2024-11-17 13:22:45.320958] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:56.256 [2024-11-17 13:22:45.321162] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:56.256 request: 00:12:56.256 { 00:12:56.256 "name": "TLSTEST", 00:12:56.256 "trtype": "tcp", 00:12:56.256 "traddr": "10.0.0.3", 00:12:56.256 "adrfam": "ipv4", 00:12:56.256 "trsvcid": "4420", 00:12:56.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.256 "prchk_reftag": false, 00:12:56.256 "prchk_guard": false, 00:12:56.256 "hdgst": false, 00:12:56.256 "ddgst": false, 00:12:56.256 "psk": "key0", 00:12:56.256 "allow_unrecognized_csi": false, 00:12:56.256 "method": "bdev_nvme_attach_controller", 00:12:56.256 "req_id": 1 00:12:56.256 } 00:12:56.256 Got JSON-RPC error response 00:12:56.256 response: 00:12:56.256 { 00:12:56.256 "code": -126, 00:12:56.256 "message": "Required key not available" 00:12:56.256 } 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71692 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71692 ']' 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71692 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71692 00:12:56.256 killing process with pid 71692 00:12:56.256 Received shutdown signal, test time was about 10.000000 seconds 00:12:56.256 00:12:56.256 Latency(us) 00:12:56.256 [2024-11-17T13:22:45.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.256 [2024-11-17T13:22:45.480Z] =================================================================================================================== 00:12:56.256 [2024-11-17T13:22:45.480Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71692' 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71692 00:12:56.256 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71692 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71503 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71503 ']' 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71503 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71503 00:12:56.516 killing process with pid 71503 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71503' 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71503 00:12:56.516 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71503 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71718 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71718 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71718 ']' 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.775 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.775 [2024-11-17 13:22:45.879431] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:56.775 [2024-11-17 13:22:45.879776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.033 [2024-11-17 13:22:46.021865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.033 [2024-11-17 13:22:46.061563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.033 [2024-11-17 13:22:46.061619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.033 [2024-11-17 13:22:46.061628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.033 [2024-11-17 13:22:46.061635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.033 [2024-11-17 13:22:46.061641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:57.033 [2024-11-17 13:22:46.061993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.033 [2024-11-17 13:22:46.111388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.4dOTcdtkO4 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4dOTcdtkO4 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.4dOTcdtkO4 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4dOTcdtkO4 00:12:57.600 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:57.859 [2024-11-17 13:22:46.975112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.859 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:58.118 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:58.376 [2024-11-17 13:22:47.535233] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:58.376 [2024-11-17 13:22:47.535606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:58.376 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:58.635 malloc0 00:12:58.635 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:58.894 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:12:59.152 
[2024-11-17 13:22:48.296851] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4dOTcdtkO4': 0100666 00:12:59.152 [2024-11-17 13:22:48.296896] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:59.152 request: 00:12:59.152 { 00:12:59.152 "name": "key0", 00:12:59.152 "path": "/tmp/tmp.4dOTcdtkO4", 00:12:59.152 "method": "keyring_file_add_key", 00:12:59.152 "req_id": 1 00:12:59.152 } 00:12:59.152 Got JSON-RPC error response 00:12:59.152 response: 00:12:59.152 { 00:12:59.152 "code": -1, 00:12:59.152 "message": "Operation not permitted" 00:12:59.152 } 00:12:59.152 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:59.411 [2024-11-17 13:22:48.508913] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:12:59.411 [2024-11-17 13:22:48.508962] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:59.411 request: 00:12:59.411 { 00:12:59.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.411 "host": "nqn.2016-06.io.spdk:host1", 00:12:59.411 "psk": "key0", 00:12:59.411 "method": "nvmf_subsystem_add_host", 00:12:59.411 "req_id": 1 00:12:59.411 } 00:12:59.411 Got JSON-RPC error response 00:12:59.411 response: 00:12:59.411 { 00:12:59.411 "code": -32603, 00:12:59.411 "message": "Internal error" 00:12:59.411 } 00:12:59.411 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:59.411 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.411 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71718 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71718 ']' 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71718 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71718 00:12:59.412 killing process with pid 71718 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71718' 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71718 00:12:59.412 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71718 00:12:59.670 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.4dOTcdtkO4 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71786 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71786 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71786 ']' 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.671 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.671 [2024-11-17 13:22:48.875593] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:59.671 [2024-11-17 13:22:48.875919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.929 [2024-11-17 13:22:49.013681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.929 [2024-11-17 13:22:49.054036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.929 [2024-11-17 13:22:49.054288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.929 [2024-11-17 13:22:49.054410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.929 [2024-11-17 13:22:49.054457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.929 [2024-11-17 13:22:49.054483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.929 [2024-11-17 13:22:49.054966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.929 [2024-11-17 13:22:49.125006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.4dOTcdtkO4 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4dOTcdtkO4 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:00.863 [2024-11-17 13:22:49.974438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.863 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:01.122 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:01.381 [2024-11-17 13:22:50.470531] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:01.381 [2024-11-17 13:22:50.471030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:01.381 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:01.640 malloc0 00:13:01.640 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:01.899 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:02.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
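Both failures above come from the key file's mode, not from TLS itself: with the file at 0666, keyring_file_check_path rejects it ("Invalid permissions for key file ... 0100666"), keyring_file_add_key returns "Operation not permitted", and the later nvmf_subsystem_add_host fails with "Key 'key0' does not exist". After the chmod 0600 at target/tls.sh@182 the same sequence succeeds. A sketch of the target-side setup order used in this run (all RPCs use the target's default /var/tmp/spdk.sock; paths and addresses are the ones from this test):

# the keyring refuses group/world-accessible PSK files, so tighten the mode first
chmod 0600 /tmp/tmp.4dOTcdtkO4
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as a secure (TLS) channel
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# the key must exist in the keyring before it can be bound to a host
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0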
00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71843 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71843 /var/tmp/bdevperf.sock 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71843 ']' 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.158 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.417 [2024-11-17 13:22:51.430786] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:02.417 [2024-11-17 13:22:51.431093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71843 ] 00:13:02.417 [2024-11-17 13:22:51.584911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.676 [2024-11-17 13:22:51.645002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.676 [2024-11-17 13:22:51.722103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:03.244 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.244 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:03.244 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:13:03.502 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:03.761 [2024-11-17 13:22:52.854576] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:03.761 TLSTESTn1 00:13:03.761 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:04.329 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:04.329 "subsystems": [ 00:13:04.329 { 00:13:04.329 "subsystem": "keyring", 00:13:04.329 "config": [ 00:13:04.329 { 00:13:04.329 "method": "keyring_file_add_key", 00:13:04.329 "params": { 00:13:04.329 "name": "key0", 00:13:04.329 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:04.329 } 00:13:04.329 } 00:13:04.329 ] 00:13:04.329 }, 
00:13:04.329 { 00:13:04.329 "subsystem": "iobuf", 00:13:04.329 "config": [ 00:13:04.329 { 00:13:04.329 "method": "iobuf_set_options", 00:13:04.329 "params": { 00:13:04.329 "small_pool_count": 8192, 00:13:04.329 "large_pool_count": 1024, 00:13:04.329 "small_bufsize": 8192, 00:13:04.329 "large_bufsize": 135168, 00:13:04.329 "enable_numa": false 00:13:04.329 } 00:13:04.329 } 00:13:04.329 ] 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "subsystem": "sock", 00:13:04.329 "config": [ 00:13:04.329 { 00:13:04.329 "method": "sock_set_default_impl", 00:13:04.329 "params": { 00:13:04.329 "impl_name": "uring" 00:13:04.329 } 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "method": "sock_impl_set_options", 00:13:04.329 "params": { 00:13:04.329 "impl_name": "ssl", 00:13:04.329 "recv_buf_size": 4096, 00:13:04.329 "send_buf_size": 4096, 00:13:04.329 "enable_recv_pipe": true, 00:13:04.329 "enable_quickack": false, 00:13:04.329 "enable_placement_id": 0, 00:13:04.329 "enable_zerocopy_send_server": true, 00:13:04.329 "enable_zerocopy_send_client": false, 00:13:04.329 "zerocopy_threshold": 0, 00:13:04.329 "tls_version": 0, 00:13:04.329 "enable_ktls": false 00:13:04.329 } 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "method": "sock_impl_set_options", 00:13:04.329 "params": { 00:13:04.329 "impl_name": "posix", 00:13:04.329 "recv_buf_size": 2097152, 00:13:04.329 "send_buf_size": 2097152, 00:13:04.329 "enable_recv_pipe": true, 00:13:04.329 "enable_quickack": false, 00:13:04.329 "enable_placement_id": 0, 00:13:04.329 "enable_zerocopy_send_server": true, 00:13:04.329 "enable_zerocopy_send_client": false, 00:13:04.329 "zerocopy_threshold": 0, 00:13:04.329 "tls_version": 0, 00:13:04.329 "enable_ktls": false 00:13:04.329 } 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "method": "sock_impl_set_options", 00:13:04.329 "params": { 00:13:04.329 "impl_name": "uring", 00:13:04.329 "recv_buf_size": 2097152, 00:13:04.329 "send_buf_size": 2097152, 00:13:04.329 "enable_recv_pipe": true, 00:13:04.329 "enable_quickack": false, 00:13:04.329 "enable_placement_id": 0, 00:13:04.329 "enable_zerocopy_send_server": false, 00:13:04.329 "enable_zerocopy_send_client": false, 00:13:04.329 "zerocopy_threshold": 0, 00:13:04.329 "tls_version": 0, 00:13:04.329 "enable_ktls": false 00:13:04.329 } 00:13:04.329 } 00:13:04.329 ] 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "subsystem": "vmd", 00:13:04.329 "config": [] 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "subsystem": "accel", 00:13:04.329 "config": [ 00:13:04.329 { 00:13:04.329 "method": "accel_set_options", 00:13:04.329 "params": { 00:13:04.329 "small_cache_size": 128, 00:13:04.329 "large_cache_size": 16, 00:13:04.329 "task_count": 2048, 00:13:04.329 "sequence_count": 2048, 00:13:04.329 "buf_count": 2048 00:13:04.329 } 00:13:04.329 } 00:13:04.329 ] 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "subsystem": "bdev", 00:13:04.329 "config": [ 00:13:04.329 { 00:13:04.329 "method": "bdev_set_options", 00:13:04.329 "params": { 00:13:04.329 "bdev_io_pool_size": 65535, 00:13:04.329 "bdev_io_cache_size": 256, 00:13:04.329 "bdev_auto_examine": true, 00:13:04.329 "iobuf_small_cache_size": 128, 00:13:04.329 "iobuf_large_cache_size": 16 00:13:04.329 } 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "method": "bdev_raid_set_options", 00:13:04.329 "params": { 00:13:04.329 "process_window_size_kb": 1024, 00:13:04.329 "process_max_bandwidth_mb_sec": 0 00:13:04.329 } 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "method": "bdev_iscsi_set_options", 00:13:04.329 "params": { 00:13:04.329 "timeout_sec": 30 00:13:04.329 } 00:13:04.329 
}, 00:13:04.329 { 00:13:04.329 "method": "bdev_nvme_set_options", 00:13:04.329 "params": { 00:13:04.329 "action_on_timeout": "none", 00:13:04.329 "timeout_us": 0, 00:13:04.329 "timeout_admin_us": 0, 00:13:04.329 "keep_alive_timeout_ms": 10000, 00:13:04.329 "arbitration_burst": 0, 00:13:04.329 "low_priority_weight": 0, 00:13:04.329 "medium_priority_weight": 0, 00:13:04.329 "high_priority_weight": 0, 00:13:04.329 "nvme_adminq_poll_period_us": 10000, 00:13:04.329 "nvme_ioq_poll_period_us": 0, 00:13:04.329 "io_queue_requests": 0, 00:13:04.329 "delay_cmd_submit": true, 00:13:04.329 "transport_retry_count": 4, 00:13:04.329 "bdev_retry_count": 3, 00:13:04.329 "transport_ack_timeout": 0, 00:13:04.329 "ctrlr_loss_timeout_sec": 0, 00:13:04.329 "reconnect_delay_sec": 0, 00:13:04.329 "fast_io_fail_timeout_sec": 0, 00:13:04.329 "disable_auto_failback": false, 00:13:04.329 "generate_uuids": false, 00:13:04.329 "transport_tos": 0, 00:13:04.329 "nvme_error_stat": false, 00:13:04.329 "rdma_srq_size": 0, 00:13:04.329 "io_path_stat": false, 00:13:04.329 "allow_accel_sequence": false, 00:13:04.329 "rdma_max_cq_size": 0, 00:13:04.329 "rdma_cm_event_timeout_ms": 0, 00:13:04.329 "dhchap_digests": [ 00:13:04.329 "sha256", 00:13:04.329 "sha384", 00:13:04.329 "sha512" 00:13:04.329 ], 00:13:04.329 "dhchap_dhgroups": [ 00:13:04.329 "null", 00:13:04.329 "ffdhe2048", 00:13:04.329 "ffdhe3072", 00:13:04.329 "ffdhe4096", 00:13:04.329 "ffdhe6144", 00:13:04.329 "ffdhe8192" 00:13:04.329 ] 00:13:04.329 } 00:13:04.329 }, 00:13:04.329 { 00:13:04.329 "method": "bdev_nvme_set_hotplug", 00:13:04.329 "params": { 00:13:04.329 "period_us": 100000, 00:13:04.329 "enable": false 00:13:04.329 } 00:13:04.329 }, 00:13:04.330 { 00:13:04.330 "method": "bdev_malloc_create", 00:13:04.330 "params": { 00:13:04.330 "name": "malloc0", 00:13:04.330 "num_blocks": 8192, 00:13:04.330 "block_size": 4096, 00:13:04.330 "physical_block_size": 4096, 00:13:04.330 "uuid": "3d030df8-d532-4844-873b-b7241088f723", 00:13:04.330 "optimal_io_boundary": 0, 00:13:04.330 "md_size": 0, 00:13:04.330 "dif_type": 0, 00:13:04.330 "dif_is_head_of_md": false, 00:13:04.330 "dif_pi_format": 0 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "bdev_wait_for_examine" 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "subsystem": "nbd", 00:13:04.330 "config": [] 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "subsystem": "scheduler", 00:13:04.330 "config": [ 00:13:04.330 { 00:13:04.330 "method": "framework_set_scheduler", 00:13:04.330 "params": { 00:13:04.330 "name": "static" 00:13:04.330 } 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "subsystem": "nvmf", 00:13:04.330 "config": [ 00:13:04.330 { 00:13:04.330 "method": "nvmf_set_config", 00:13:04.330 "params": { 00:13:04.330 "discovery_filter": "match_any", 00:13:04.330 "admin_cmd_passthru": { 00:13:04.330 "identify_ctrlr": false 00:13:04.330 }, 00:13:04.330 "dhchap_digests": [ 00:13:04.330 "sha256", 00:13:04.330 "sha384", 00:13:04.330 "sha512" 00:13:04.330 ], 00:13:04.330 "dhchap_dhgroups": [ 00:13:04.330 "null", 00:13:04.330 "ffdhe2048", 00:13:04.330 "ffdhe3072", 00:13:04.330 "ffdhe4096", 00:13:04.330 "ffdhe6144", 00:13:04.330 "ffdhe8192" 00:13:04.330 ] 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_set_max_subsystems", 00:13:04.330 "params": { 00:13:04.330 "max_subsystems": 1024 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_set_crdt", 00:13:04.330 "params": { 00:13:04.330 "crdt1": 0, 00:13:04.330 
"crdt2": 0, 00:13:04.330 "crdt3": 0 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_create_transport", 00:13:04.330 "params": { 00:13:04.330 "trtype": "TCP", 00:13:04.330 "max_queue_depth": 128, 00:13:04.330 "max_io_qpairs_per_ctrlr": 127, 00:13:04.330 "in_capsule_data_size": 4096, 00:13:04.330 "max_io_size": 131072, 00:13:04.330 "io_unit_size": 131072, 00:13:04.330 "max_aq_depth": 128, 00:13:04.330 "num_shared_buffers": 511, 00:13:04.330 "buf_cache_size": 4294967295, 00:13:04.330 "dif_insert_or_strip": false, 00:13:04.330 "zcopy": false, 00:13:04.330 "c2h_success": false, 00:13:04.330 "sock_priority": 0, 00:13:04.330 "abort_timeout_sec": 1, 00:13:04.330 "ack_timeout": 0, 00:13:04.330 "data_wr_pool_size": 0 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_create_subsystem", 00:13:04.330 "params": { 00:13:04.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.330 "allow_any_host": false, 00:13:04.330 "serial_number": "SPDK00000000000001", 00:13:04.330 "model_number": "SPDK bdev Controller", 00:13:04.330 "max_namespaces": 10, 00:13:04.330 "min_cntlid": 1, 00:13:04.330 "max_cntlid": 65519, 00:13:04.330 "ana_reporting": false 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_subsystem_add_host", 00:13:04.330 "params": { 00:13:04.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.330 "host": "nqn.2016-06.io.spdk:host1", 00:13:04.330 "psk": "key0" 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_subsystem_add_ns", 00:13:04.330 "params": { 00:13:04.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.330 "namespace": { 00:13:04.330 "nsid": 1, 00:13:04.330 "bdev_name": "malloc0", 00:13:04.330 "nguid": "3D030DF8D5324844873BB7241088F723", 00:13:04.330 "uuid": "3d030df8-d532-4844-873b-b7241088f723", 00:13:04.330 "no_auto_visible": false 00:13:04.330 } 00:13:04.330 } 00:13:04.330 }, 00:13:04.330 { 00:13:04.330 "method": "nvmf_subsystem_add_listener", 00:13:04.330 "params": { 00:13:04.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.330 "listen_address": { 00:13:04.330 "trtype": "TCP", 00:13:04.330 "adrfam": "IPv4", 00:13:04.330 "traddr": "10.0.0.3", 00:13:04.330 "trsvcid": "4420" 00:13:04.330 }, 00:13:04.330 "secure_channel": true 00:13:04.330 } 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 } 00:13:04.330 ] 00:13:04.330 }' 00:13:04.330 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:04.590 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:04.590 "subsystems": [ 00:13:04.590 { 00:13:04.590 "subsystem": "keyring", 00:13:04.590 "config": [ 00:13:04.590 { 00:13:04.590 "method": "keyring_file_add_key", 00:13:04.590 "params": { 00:13:04.590 "name": "key0", 00:13:04.590 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:04.590 } 00:13:04.590 } 00:13:04.590 ] 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "subsystem": "iobuf", 00:13:04.590 "config": [ 00:13:04.590 { 00:13:04.590 "method": "iobuf_set_options", 00:13:04.590 "params": { 00:13:04.590 "small_pool_count": 8192, 00:13:04.590 "large_pool_count": 1024, 00:13:04.590 "small_bufsize": 8192, 00:13:04.590 "large_bufsize": 135168, 00:13:04.590 "enable_numa": false 00:13:04.590 } 00:13:04.590 } 00:13:04.590 ] 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "subsystem": "sock", 00:13:04.590 "config": [ 00:13:04.590 { 00:13:04.590 "method": "sock_set_default_impl", 00:13:04.590 "params": { 00:13:04.590 "impl_name": "uring" 00:13:04.590 
} 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "method": "sock_impl_set_options", 00:13:04.590 "params": { 00:13:04.590 "impl_name": "ssl", 00:13:04.590 "recv_buf_size": 4096, 00:13:04.590 "send_buf_size": 4096, 00:13:04.590 "enable_recv_pipe": true, 00:13:04.590 "enable_quickack": false, 00:13:04.590 "enable_placement_id": 0, 00:13:04.590 "enable_zerocopy_send_server": true, 00:13:04.590 "enable_zerocopy_send_client": false, 00:13:04.590 "zerocopy_threshold": 0, 00:13:04.590 "tls_version": 0, 00:13:04.590 "enable_ktls": false 00:13:04.590 } 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "method": "sock_impl_set_options", 00:13:04.590 "params": { 00:13:04.590 "impl_name": "posix", 00:13:04.590 "recv_buf_size": 2097152, 00:13:04.590 "send_buf_size": 2097152, 00:13:04.590 "enable_recv_pipe": true, 00:13:04.590 "enable_quickack": false, 00:13:04.590 "enable_placement_id": 0, 00:13:04.590 "enable_zerocopy_send_server": true, 00:13:04.590 "enable_zerocopy_send_client": false, 00:13:04.590 "zerocopy_threshold": 0, 00:13:04.590 "tls_version": 0, 00:13:04.590 "enable_ktls": false 00:13:04.590 } 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "method": "sock_impl_set_options", 00:13:04.590 "params": { 00:13:04.590 "impl_name": "uring", 00:13:04.590 "recv_buf_size": 2097152, 00:13:04.590 "send_buf_size": 2097152, 00:13:04.590 "enable_recv_pipe": true, 00:13:04.590 "enable_quickack": false, 00:13:04.590 "enable_placement_id": 0, 00:13:04.590 "enable_zerocopy_send_server": false, 00:13:04.590 "enable_zerocopy_send_client": false, 00:13:04.590 "zerocopy_threshold": 0, 00:13:04.590 "tls_version": 0, 00:13:04.590 "enable_ktls": false 00:13:04.590 } 00:13:04.590 } 00:13:04.590 ] 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "subsystem": "vmd", 00:13:04.590 "config": [] 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "subsystem": "accel", 00:13:04.590 "config": [ 00:13:04.590 { 00:13:04.590 "method": "accel_set_options", 00:13:04.590 "params": { 00:13:04.590 "small_cache_size": 128, 00:13:04.590 "large_cache_size": 16, 00:13:04.590 "task_count": 2048, 00:13:04.590 "sequence_count": 2048, 00:13:04.590 "buf_count": 2048 00:13:04.590 } 00:13:04.590 } 00:13:04.590 ] 00:13:04.590 }, 00:13:04.590 { 00:13:04.590 "subsystem": "bdev", 00:13:04.590 "config": [ 00:13:04.590 { 00:13:04.590 "method": "bdev_set_options", 00:13:04.590 "params": { 00:13:04.591 "bdev_io_pool_size": 65535, 00:13:04.591 "bdev_io_cache_size": 256, 00:13:04.591 "bdev_auto_examine": true, 00:13:04.591 "iobuf_small_cache_size": 128, 00:13:04.591 "iobuf_large_cache_size": 16 00:13:04.591 } 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "method": "bdev_raid_set_options", 00:13:04.591 "params": { 00:13:04.591 "process_window_size_kb": 1024, 00:13:04.591 "process_max_bandwidth_mb_sec": 0 00:13:04.591 } 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "method": "bdev_iscsi_set_options", 00:13:04.591 "params": { 00:13:04.591 "timeout_sec": 30 00:13:04.591 } 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "method": "bdev_nvme_set_options", 00:13:04.591 "params": { 00:13:04.591 "action_on_timeout": "none", 00:13:04.591 "timeout_us": 0, 00:13:04.591 "timeout_admin_us": 0, 00:13:04.591 "keep_alive_timeout_ms": 10000, 00:13:04.591 "arbitration_burst": 0, 00:13:04.591 "low_priority_weight": 0, 00:13:04.591 "medium_priority_weight": 0, 00:13:04.591 "high_priority_weight": 0, 00:13:04.591 "nvme_adminq_poll_period_us": 10000, 00:13:04.591 "nvme_ioq_poll_period_us": 0, 00:13:04.591 "io_queue_requests": 512, 00:13:04.591 "delay_cmd_submit": true, 00:13:04.591 "transport_retry_count": 4, 
00:13:04.591 "bdev_retry_count": 3, 00:13:04.591 "transport_ack_timeout": 0, 00:13:04.591 "ctrlr_loss_timeout_sec": 0, 00:13:04.591 "reconnect_delay_sec": 0, 00:13:04.591 "fast_io_fail_timeout_sec": 0, 00:13:04.591 "disable_auto_failback": false, 00:13:04.591 "generate_uuids": false, 00:13:04.591 "transport_tos": 0, 00:13:04.591 "nvme_error_stat": false, 00:13:04.591 "rdma_srq_size": 0, 00:13:04.591 "io_path_stat": false, 00:13:04.591 "allow_accel_sequence": false, 00:13:04.591 "rdma_max_cq_size": 0, 00:13:04.591 "rdma_cm_event_timeout_ms": 0, 00:13:04.591 "dhchap_digests": [ 00:13:04.591 "sha256", 00:13:04.591 "sha384", 00:13:04.591 "sha512" 00:13:04.591 ], 00:13:04.591 "dhchap_dhgroups": [ 00:13:04.591 "null", 00:13:04.591 "ffdhe2048", 00:13:04.591 "ffdhe3072", 00:13:04.591 "ffdhe4096", 00:13:04.591 "ffdhe6144", 00:13:04.591 "ffdhe8192" 00:13:04.591 ] 00:13:04.591 } 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "method": "bdev_nvme_attach_controller", 00:13:04.591 "params": { 00:13:04.591 "name": "TLSTEST", 00:13:04.591 "trtype": "TCP", 00:13:04.591 "adrfam": "IPv4", 00:13:04.591 "traddr": "10.0.0.3", 00:13:04.591 "trsvcid": "4420", 00:13:04.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.591 "prchk_reftag": false, 00:13:04.591 "prchk_guard": false, 00:13:04.591 "ctrlr_loss_timeout_sec": 0, 00:13:04.591 "reconnect_delay_sec": 0, 00:13:04.591 "fast_io_fail_timeout_sec": 0, 00:13:04.591 "psk": "key0", 00:13:04.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:04.591 "hdgst": false, 00:13:04.591 "ddgst": false, 00:13:04.591 "multipath": "multipath" 00:13:04.591 } 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "method": "bdev_nvme_set_hotplug", 00:13:04.591 "params": { 00:13:04.591 "period_us": 100000, 00:13:04.591 "enable": false 00:13:04.591 } 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "method": "bdev_wait_for_examine" 00:13:04.591 } 00:13:04.591 ] 00:13:04.591 }, 00:13:04.591 { 00:13:04.591 "subsystem": "nbd", 00:13:04.591 "config": [] 00:13:04.591 } 00:13:04.591 ] 00:13:04.591 }' 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71843 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71843 ']' 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71843 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71843 00:13:04.591 killing process with pid 71843 00:13:04.591 Received shutdown signal, test time was about 10.000000 seconds 00:13:04.591 00:13:04.591 Latency(us) 00:13:04.591 [2024-11-17T13:22:53.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.591 [2024-11-17T13:22:53.815Z] =================================================================================================================== 00:13:04.591 [2024-11-17T13:22:53.815Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71843' 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71843 00:13:04.591 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71843 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71786 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71786 ']' 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71786 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71786 00:13:04.851 killing process with pid 71786 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71786' 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71786 00:13:04.851 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71786 00:13:05.111 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:05.111 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.111 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.111 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:05.111 "subsystems": [ 00:13:05.111 { 00:13:05.111 "subsystem": "keyring", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "keyring_file_add_key", 00:13:05.111 "params": { 00:13:05.111 "name": "key0", 00:13:05.111 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:05.111 } 00:13:05.111 } 00:13:05.111 ] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "iobuf", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "iobuf_set_options", 00:13:05.111 "params": { 00:13:05.111 "small_pool_count": 8192, 00:13:05.111 "large_pool_count": 1024, 00:13:05.111 "small_bufsize": 8192, 00:13:05.111 "large_bufsize": 135168, 00:13:05.111 "enable_numa": false 00:13:05.111 } 00:13:05.111 } 00:13:05.111 ] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "sock", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "sock_set_default_impl", 00:13:05.111 "params": { 00:13:05.111 "impl_name": "uring" 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "sock_impl_set_options", 00:13:05.111 "params": { 00:13:05.111 "impl_name": "ssl", 00:13:05.111 "recv_buf_size": 4096, 00:13:05.111 "send_buf_size": 4096, 00:13:05.111 "enable_recv_pipe": true, 00:13:05.111 "enable_quickack": false, 00:13:05.111 "enable_placement_id": 0, 00:13:05.111 "enable_zerocopy_send_server": true, 00:13:05.111 "enable_zerocopy_send_client": false, 00:13:05.111 "zerocopy_threshold": 0, 00:13:05.111 "tls_version": 0, 00:13:05.111 "enable_ktls": false 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": 
"sock_impl_set_options", 00:13:05.111 "params": { 00:13:05.111 "impl_name": "posix", 00:13:05.111 "recv_buf_size": 2097152, 00:13:05.111 "send_buf_size": 2097152, 00:13:05.111 "enable_recv_pipe": true, 00:13:05.111 "enable_quickack": false, 00:13:05.111 "enable_placement_id": 0, 00:13:05.111 "enable_zerocopy_send_server": true, 00:13:05.111 "enable_zerocopy_send_client": false, 00:13:05.111 "zerocopy_threshold": 0, 00:13:05.111 "tls_version": 0, 00:13:05.111 "enable_ktls": false 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "sock_impl_set_options", 00:13:05.111 "params": { 00:13:05.111 "impl_name": "uring", 00:13:05.111 "recv_buf_size": 2097152, 00:13:05.111 "send_buf_size": 2097152, 00:13:05.111 "enable_recv_pipe": true, 00:13:05.111 "enable_quickack": false, 00:13:05.111 "enable_placement_id": 0, 00:13:05.111 "enable_zerocopy_send_server": false, 00:13:05.111 "enable_zerocopy_send_client": false, 00:13:05.111 "zerocopy_threshold": 0, 00:13:05.111 "tls_version": 0, 00:13:05.111 "enable_ktls": false 00:13:05.111 } 00:13:05.111 } 00:13:05.111 ] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "vmd", 00:13:05.111 "config": [] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "accel", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "accel_set_options", 00:13:05.111 "params": { 00:13:05.111 "small_cache_size": 128, 00:13:05.111 "large_cache_size": 16, 00:13:05.111 "task_count": 2048, 00:13:05.111 "sequence_count": 2048, 00:13:05.111 "buf_count": 2048 00:13:05.111 } 00:13:05.111 } 00:13:05.111 ] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "bdev", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "bdev_set_options", 00:13:05.111 "params": { 00:13:05.111 "bdev_io_pool_size": 65535, 00:13:05.111 "bdev_io_cache_size": 256, 00:13:05.111 "bdev_auto_examine": true, 00:13:05.111 "iobuf_small_cache_size": 128, 00:13:05.111 "iobuf_large_cache_size": 16 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "bdev_raid_set_options", 00:13:05.111 "params": { 00:13:05.111 "process_window_size_kb": 1024, 00:13:05.111 "process_max_bandwidth_mb_sec": 0 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "bdev_iscsi_set_options", 00:13:05.111 "params": { 00:13:05.111 "timeout_sec": 30 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "bdev_nvme_set_options", 00:13:05.111 "params": { 00:13:05.111 "action_on_timeout": "none", 00:13:05.111 "timeout_us": 0, 00:13:05.111 "timeout_admin_us": 0, 00:13:05.111 "keep_alive_timeout_ms": 10000, 00:13:05.111 "arbitration_burst": 0, 00:13:05.111 "low_priority_weight": 0, 00:13:05.111 "medium_priority_weight": 0, 00:13:05.111 "high_priority_weight": 0, 00:13:05.111 "nvme_adminq_poll_period_us": 10000, 00:13:05.111 "nvme_ioq_poll_period_us": 0, 00:13:05.111 "io_queue_requests": 0, 00:13:05.111 "delay_cmd_submit": true, 00:13:05.111 "transport_retry_count": 4, 00:13:05.111 "bdev_retry_count": 3, 00:13:05.111 "transport_ack_timeout": 0, 00:13:05.111 "ctrlr_loss_timeout_sec": 0, 00:13:05.111 "reconnect_delay_sec": 0, 00:13:05.111 "fast_io_fail_timeout_sec": 0, 00:13:05.111 "disable_auto_failback": false, 00:13:05.111 "generate_uuids": false, 00:13:05.111 "transport_tos": 0, 00:13:05.111 "nvme_error_stat": false, 00:13:05.111 "rdma_srq_size": 0, 00:13:05.111 "io_path_stat": false, 00:13:05.111 "allow_accel_sequence": false, 00:13:05.111 "rdma_max_cq_size": 0, 00:13:05.111 "rdma_cm_event_timeout_ms": 0, 00:13:05.111 "dhchap_digests": [ 00:13:05.111 
"sha256", 00:13:05.111 "sha384", 00:13:05.111 "sha512" 00:13:05.111 ], 00:13:05.111 "dhchap_dhgroups": [ 00:13:05.111 "null", 00:13:05.111 "ffdhe2048", 00:13:05.111 "ffdhe3072", 00:13:05.111 "ffdhe4096", 00:13:05.111 "ffdhe6144", 00:13:05.111 "ffdhe8192" 00:13:05.111 ] 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "bdev_nvme_set_hotplug", 00:13:05.111 "params": { 00:13:05.111 "period_us": 100000, 00:13:05.111 "enable": false 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "bdev_malloc_create", 00:13:05.111 "params": { 00:13:05.111 "name": "malloc0", 00:13:05.111 "num_blocks": 8192, 00:13:05.111 "block_size": 4096, 00:13:05.111 "physical_block_size": 4096, 00:13:05.111 "uuid": "3d030df8-d532-4844-873b-b7241088f723", 00:13:05.111 "optimal_io_boundary": 0, 00:13:05.111 "md_size": 0, 00:13:05.111 "dif_type": 0, 00:13:05.111 "dif_is_head_of_md": false, 00:13:05.111 "dif_pi_format": 0 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "bdev_wait_for_examine" 00:13:05.111 } 00:13:05.111 ] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "nbd", 00:13:05.111 "config": [] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "scheduler", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "framework_set_scheduler", 00:13:05.111 "params": { 00:13:05.111 "name": "static" 00:13:05.111 } 00:13:05.111 } 00:13:05.111 ] 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "subsystem": "nvmf", 00:13:05.111 "config": [ 00:13:05.111 { 00:13:05.111 "method": "nvmf_set_config", 00:13:05.111 "params": { 00:13:05.111 "discovery_filter": "match_any", 00:13:05.111 "admin_cmd_passthru": { 00:13:05.111 "identify_ctrlr": false 00:13:05.111 }, 00:13:05.111 "dhchap_digests": [ 00:13:05.111 "sha256", 00:13:05.111 "sha384", 00:13:05.111 "sha512" 00:13:05.111 ], 00:13:05.111 "dhchap_dhgroups": [ 00:13:05.111 "null", 00:13:05.111 "ffdhe2048", 00:13:05.111 "ffdhe3072", 00:13:05.111 "ffdhe4096", 00:13:05.111 "ffdhe6144", 00:13:05.111 "ffdhe8192" 00:13:05.111 ] 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "nvmf_set_max_subsystems", 00:13:05.111 "params": { 00:13:05.111 "max_subsystems": 1024 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "nvmf_set_crdt", 00:13:05.111 "params": { 00:13:05.111 "crdt1": 0, 00:13:05.111 "crdt2": 0, 00:13:05.111 "crdt3": 0 00:13:05.111 } 00:13:05.111 }, 00:13:05.111 { 00:13:05.111 "method": "nvmf_create_transport", 00:13:05.111 "params": { 00:13:05.111 "trtype": "TCP", 00:13:05.111 "max_queue_depth": 128, 00:13:05.111 "max_io_qpairs_per_ctrlr": 127, 00:13:05.111 "in_capsule_data_size": 4096, 00:13:05.111 "max_io_size": 131072, 00:13:05.111 "io_unit_size": 131072, 00:13:05.111 "max_aq_depth": 128, 00:13:05.111 "num_shared_buffers": 511, 00:13:05.111 "buf_cache_size": 4294967295, 00:13:05.112 "dif_insert_or_strip": false, 00:13:05.112 "zcopy": false, 00:13:05.112 "c2h_success": false, 00:13:05.112 "sock_priority": 0, 00:13:05.112 "abort_timeout_sec": 1, 00:13:05.112 "ack_timeout": 0, 00:13:05.112 "data_wr_pool_size": 0 00:13:05.112 } 00:13:05.112 }, 00:13:05.112 { 00:13:05.112 "method": "nvmf_create_subsystem", 00:13:05.112 "params": { 00:13:05.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.112 "allow_any_host": false, 00:13:05.112 "serial_number": "SPDK00000000000001", 00:13:05.112 "model_number": "SPDK bdev Controller", 00:13:05.112 "max_namespaces": 10, 00:13:05.112 "min_cntlid": 1, 00:13:05.112 "max_cntlid": 65519, 00:13:05.112 "ana_reporting": false 00:13:05.112 } 
00:13:05.112 }, 00:13:05.112 { 00:13:05.112 "method": "nvmf_subsystem_add_host", 00:13:05.112 "params": { 00:13:05.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.112 "host": "nqn.2016-06.io.spdk:host1", 00:13:05.112 "psk": "key0" 00:13:05.112 } 00:13:05.112 }, 00:13:05.112 { 00:13:05.112 "method": "nvmf_subsystem_add_ns", 00:13:05.112 "params": { 00:13:05.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.112 "namespace": { 00:13:05.112 "nsid": 1, 00:13:05.112 "bdev_name": "malloc0", 00:13:05.112 "nguid": "3D030DF8D5324844873BB7241088F723", 00:13:05.112 "uuid": "3d030df8-d532-4844-873b-b7241088f723", 00:13:05.112 "no_auto_visible": false 00:13:05.112 } 00:13:05.112 } 00:13:05.112 }, 00:13:05.112 { 00:13:05.112 "method": "nvmf_subsystem_add_listener", 00:13:05.112 "params": { 00:13:05.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.112 "listen_address": { 00:13:05.112 "trtype": "TCP", 00:13:05.112 "adrfam": "IPv4", 00:13:05.112 "traddr": "10.0.0.3", 00:13:05.112 "trsvcid": "4420" 00:13:05.112 }, 00:13:05.112 "secure_channel": true 00:13:05.112 } 00:13:05.112 } 00:13:05.112 ] 00:13:05.112 } 00:13:05.112 ] 00:13:05.112 }' 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:05.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71888 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71888 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71888 ']' 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.112 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:05.112 [2024-11-17 13:22:54.159370] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:05.112 [2024-11-17 13:22:54.159627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.112 [2024-11-17 13:22:54.302298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.371 [2024-11-17 13:22:54.339697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.371 [2024-11-17 13:22:54.339755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:05.371 [2024-11-17 13:22:54.339795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.371 [2024-11-17 13:22:54.339802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.371 [2024-11-17 13:22:54.339809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.371 [2024-11-17 13:22:54.340244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.371 [2024-11-17 13:22:54.507161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:05.371 [2024-11-17 13:22:54.582639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.630 [2024-11-17 13:22:54.614578] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:05.630 [2024-11-17 13:22:54.614956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.889 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.889 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:05.889 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:05.889 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:05.889 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71926 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71926 /var/tmp/bdevperf.sock 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71926 ']' 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
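The first nvmf_tgt above (pid 71888) takes its whole TLS setup from the JSON blob echoed just before it and handed over on /dev/fd/62: a keyring entry named key0, a cnode1 subsystem whose host entry carries "psk": "key0", and a listener created with "secure_channel": true. A minimal sketch of that launch pattern, with the config body abbreviated to the keyring piece and the key path taken from the keyring_file_add_key calls elsewhere in this log:

    # Sketch only: start nvmf_tgt from a JSON config supplied on an anonymous fd,
    # the same "-c /dev/fd/NN" pattern used by the invocation traced above.
    # The config here is abbreviated; the real one is the full dump above.
    tgt_json='{"subsystems":[{"subsystem":"keyring","config":[{"method":"keyring_file_add_key","params":{"name":"key0","path":"/tmp/tmp.4dOTcdtkO4"}}]}]}'
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgt_json")
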
00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:06.148 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:06.148 "subsystems": [ 00:13:06.148 { 00:13:06.148 "subsystem": "keyring", 00:13:06.148 "config": [ 00:13:06.148 { 00:13:06.148 "method": "keyring_file_add_key", 00:13:06.148 "params": { 00:13:06.148 "name": "key0", 00:13:06.148 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:06.148 } 00:13:06.148 } 00:13:06.148 ] 00:13:06.148 }, 00:13:06.148 { 00:13:06.148 "subsystem": "iobuf", 00:13:06.148 "config": [ 00:13:06.149 { 00:13:06.149 "method": "iobuf_set_options", 00:13:06.149 "params": { 00:13:06.149 "small_pool_count": 8192, 00:13:06.149 "large_pool_count": 1024, 00:13:06.149 "small_bufsize": 8192, 00:13:06.149 "large_bufsize": 135168, 00:13:06.149 "enable_numa": false 00:13:06.149 } 00:13:06.149 } 00:13:06.149 ] 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "subsystem": "sock", 00:13:06.149 "config": [ 00:13:06.149 { 00:13:06.149 "method": "sock_set_default_impl", 00:13:06.149 "params": { 00:13:06.149 "impl_name": "uring" 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "sock_impl_set_options", 00:13:06.149 "params": { 00:13:06.149 "impl_name": "ssl", 00:13:06.149 "recv_buf_size": 4096, 00:13:06.149 "send_buf_size": 4096, 00:13:06.149 "enable_recv_pipe": true, 00:13:06.149 "enable_quickack": false, 00:13:06.149 "enable_placement_id": 0, 00:13:06.149 "enable_zerocopy_send_server": true, 00:13:06.149 "enable_zerocopy_send_client": false, 00:13:06.149 "zerocopy_threshold": 0, 00:13:06.149 "tls_version": 0, 00:13:06.149 "enable_ktls": false 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "sock_impl_set_options", 00:13:06.149 "params": { 00:13:06.149 "impl_name": "posix", 00:13:06.149 "recv_buf_size": 2097152, 00:13:06.149 "send_buf_size": 2097152, 00:13:06.149 "enable_recv_pipe": true, 00:13:06.149 "enable_quickack": false, 00:13:06.149 "enable_placement_id": 0, 00:13:06.149 "enable_zerocopy_send_server": true, 00:13:06.149 "enable_zerocopy_send_client": false, 00:13:06.149 "zerocopy_threshold": 0, 00:13:06.149 "tls_version": 0, 00:13:06.149 "enable_ktls": false 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "sock_impl_set_options", 00:13:06.149 "params": { 00:13:06.149 "impl_name": "uring", 00:13:06.149 "recv_buf_size": 2097152, 00:13:06.149 "send_buf_size": 2097152, 00:13:06.149 "enable_recv_pipe": true, 00:13:06.149 "enable_quickack": false, 00:13:06.149 "enable_placement_id": 0, 00:13:06.149 "enable_zerocopy_send_server": false, 00:13:06.149 "enable_zerocopy_send_client": false, 00:13:06.149 "zerocopy_threshold": 0, 00:13:06.149 "tls_version": 0, 00:13:06.149 "enable_ktls": false 00:13:06.149 } 00:13:06.149 } 00:13:06.149 ] 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "subsystem": "vmd", 00:13:06.149 "config": [] 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "subsystem": "accel", 00:13:06.149 "config": [ 00:13:06.149 { 00:13:06.149 "method": "accel_set_options", 00:13:06.149 "params": { 00:13:06.149 "small_cache_size": 128, 00:13:06.149 "large_cache_size": 16, 00:13:06.149 "task_count": 2048, 00:13:06.149 "sequence_count": 
2048, 00:13:06.149 "buf_count": 2048 00:13:06.149 } 00:13:06.149 } 00:13:06.149 ] 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "subsystem": "bdev", 00:13:06.149 "config": [ 00:13:06.149 { 00:13:06.149 "method": "bdev_set_options", 00:13:06.149 "params": { 00:13:06.149 "bdev_io_pool_size": 65535, 00:13:06.149 "bdev_io_cache_size": 256, 00:13:06.149 "bdev_auto_examine": true, 00:13:06.149 "iobuf_small_cache_size": 128, 00:13:06.149 "iobuf_large_cache_size": 16 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "bdev_raid_set_options", 00:13:06.149 "params": { 00:13:06.149 "process_window_size_kb": 1024, 00:13:06.149 "process_max_bandwidth_mb_sec": 0 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "bdev_iscsi_set_options", 00:13:06.149 "params": { 00:13:06.149 "timeout_sec": 30 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "bdev_nvme_set_options", 00:13:06.149 "params": { 00:13:06.149 "action_on_timeout": "none", 00:13:06.149 "timeout_us": 0, 00:13:06.149 "timeout_admin_us": 0, 00:13:06.149 "keep_alive_timeout_ms": 10000, 00:13:06.149 "arbitration_burst": 0, 00:13:06.149 "low_priority_weight": 0, 00:13:06.149 "medium_priority_weight": 0, 00:13:06.149 "high_priority_weight": 0, 00:13:06.149 "nvme_adminq_poll_period_us": 10000, 00:13:06.149 "nvme_ioq_poll_period_us": 0, 00:13:06.149 "io_queue_requests": 512, 00:13:06.149 "delay_cmd_submit": true, 00:13:06.149 "transport_retry_count": 4, 00:13:06.149 "bdev_retry_count": 3, 00:13:06.149 "transport_ack_timeout": 0, 00:13:06.149 "ctrlr_loss_timeout_sec": 0, 00:13:06.149 "reconnect_delay_sec": 0, 00:13:06.149 "fast_io_fail_timeout_sec": 0, 00:13:06.149 "disable_auto_failback": false, 00:13:06.149 "generate_uuids": false, 00:13:06.149 "transport_tos": 0, 00:13:06.149 "nvme_error_stat": false, 00:13:06.149 "rdma_srq_size": 0, 00:13:06.149 "io_path_stat": false, 00:13:06.149 "allow_accel_sequence": false, 00:13:06.149 "rdma_max_cq_size": 0, 00:13:06.149 "rdma_cm_event_timeout_ms": 0, 00:13:06.149 "dhchap_digests": [ 00:13:06.149 "sha256", 00:13:06.149 "sha384", 00:13:06.149 "sha512" 00:13:06.149 ], 00:13:06.149 "dhchap_dhgroups": [ 00:13:06.149 "null", 00:13:06.149 "ffdhe2048", 00:13:06.149 "ffdhe3072", 00:13:06.149 "ffdhe4096", 00:13:06.149 "ffdhe6144", 00:13:06.149 "ffdhe8192" 00:13:06.149 ] 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "bdev_nvme_attach_controller", 00:13:06.149 "params": { 00:13:06.149 "name": "TLSTEST", 00:13:06.149 "trtype": "TCP", 00:13:06.149 "adrfam": "IPv4", 00:13:06.149 "traddr": "10.0.0.3", 00:13:06.149 "trsvcid": "4420", 00:13:06.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.149 "prchk_reftag": false, 00:13:06.149 "prchk_guard": false, 00:13:06.149 "ctrlr_loss_timeout_sec": 0, 00:13:06.149 "reconnect_delay_sec": 0, 00:13:06.149 "fast_io_fail_timeout_sec": 0, 00:13:06.149 "psk": "key0", 00:13:06.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.149 "hdgst": false, 00:13:06.149 "ddgst": false, 00:13:06.149 "multipath": "multipath" 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "bdev_nvme_set_hotplug", 00:13:06.149 "params": { 00:13:06.149 "period_us": 100000, 00:13:06.149 "enable": false 00:13:06.149 } 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "method": "bdev_wait_for_examine" 00:13:06.149 } 00:13:06.149 ] 00:13:06.149 }, 00:13:06.149 { 00:13:06.149 "subsystem": "nbd", 00:13:06.149 "config": [] 00:13:06.149 } 00:13:06.149 ] 00:13:06.149 }' 00:13:06.149 [2024-11-17 13:22:55.205128] Starting SPDK v25.01-pre git 
sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:06.149 [2024-11-17 13:22:55.205914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71926 ] 00:13:06.149 [2024-11-17 13:22:55.349827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.409 [2024-11-17 13:22:55.399041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.409 [2024-11-17 13:22:55.551169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.409 [2024-11-17 13:22:55.609003] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:06.976 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.977 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:06.977 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:07.236 Running I/O for 10 seconds... 00:13:09.105 4446.00 IOPS, 17.37 MiB/s [2024-11-17T13:22:59.706Z] 4480.00 IOPS, 17.50 MiB/s [2024-11-17T13:23:00.642Z] 4480.00 IOPS, 17.50 MiB/s [2024-11-17T13:23:01.580Z] 4546.00 IOPS, 17.76 MiB/s [2024-11-17T13:23:02.516Z] 4583.00 IOPS, 17.90 MiB/s [2024-11-17T13:23:03.468Z] 4605.00 IOPS, 17.99 MiB/s [2024-11-17T13:23:04.434Z] 4618.14 IOPS, 18.04 MiB/s [2024-11-17T13:23:05.369Z] 4627.25 IOPS, 18.08 MiB/s [2024-11-17T13:23:06.306Z] 4633.33 IOPS, 18.10 MiB/s [2024-11-17T13:23:06.306Z] 4639.80 IOPS, 18.12 MiB/s 00:13:17.082 Latency(us) 00:13:17.082 [2024-11-17T13:23:06.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:17.082 Verification LBA range: start 0x0 length 0x2000 00:13:17.082 TLSTESTn1 : 10.01 4645.48 18.15 0.00 0.00 27506.45 5630.14 28240.06 00:13:17.082 [2024-11-17T13:23:06.306Z] =================================================================================================================== 00:13:17.082 [2024-11-17T13:23:06.306Z] Total : 4645.48 18.15 0.00 0.00 27506.45 5630.14 28240.06 00:13:17.082 { 00:13:17.082 "results": [ 00:13:17.082 { 00:13:17.082 "job": "TLSTESTn1", 00:13:17.082 "core_mask": "0x4", 00:13:17.082 "workload": "verify", 00:13:17.082 "status": "finished", 00:13:17.082 "verify_range": { 00:13:17.082 "start": 0, 00:13:17.082 "length": 8192 00:13:17.082 }, 00:13:17.082 "queue_depth": 128, 00:13:17.082 "io_size": 4096, 00:13:17.082 "runtime": 10.01469, 00:13:17.082 "iops": 4645.475796055594, 00:13:17.082 "mibps": 18.146389828342166, 00:13:17.082 "io_failed": 0, 00:13:17.082 "io_timeout": 0, 00:13:17.082 "avg_latency_us": 27506.449873122387, 00:13:17.082 "min_latency_us": 5630.138181818182, 00:13:17.082 "max_latency_us": 28240.05818181818 00:13:17.082 } 00:13:17.082 ], 00:13:17.082 "core_count": 1 00:13:17.082 } 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71926 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71926 ']' 00:13:17.341 
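The 10-second TLSTESTn1 verify above lands at roughly 4645 IOPS, which at 4 KiB per I/O is the 18.15 MiB/s shown in the table (4645 x 4096 / 2^20). That run took its TLS attach from the bdevperf JSON config fed on /dev/fd/63: keyring key0 plus bdev_nvme_attach_controller with "psk": "key0". Later runs in this same log drive the identical attach over the bdevperf RPC socket instead; gathered from those xtrace lines, the sequence is roughly:

    # Roughly the RPC-driven equivalent of the bdevperf JSON config above, using
    # the socket, key path, address and NQNs printed in this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # then trigger the workload, as tls.sh does for the 10-second run above:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests
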
13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71926 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71926 00:13:17.341 killing process with pid 71926 00:13:17.341 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.341 00:13:17.341 Latency(us) 00:13:17.341 [2024-11-17T13:23:06.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.341 [2024-11-17T13:23:06.565Z] =================================================================================================================== 00:13:17.341 [2024-11-17T13:23:06.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71926' 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71926 00:13:17.341 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71926 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71888 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71888 ']' 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71888 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71888 00:13:17.601 killing process with pid 71888 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71888' 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71888 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71888 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72059 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
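The teardown just traced (and repeated for every pid in this log) is the autotest killprocess helper: probe the pid with kill -0, check the process name from ps so a sudo wrapper is not killed by mistake, then kill and wait. Stripped of the xtrace prefixes, the visible sequence is roughly this sketch (not the real helper, which lives in autotest_common.sh):

    # Sketch of the killprocess pattern seen in the trace; simplified, and the
    # real helper in autotest_common.sh handles the sudo-wrapped case differently.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                 # already gone
        local name; name=$(ps --no-headers -o comm= "$pid")    # compared against 'sudo' in the trace
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }
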
00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72059 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72059 ']' 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.601 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.859 [2024-11-17 13:23:06.853632] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:17.859 [2024-11-17 13:23:06.853716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.859 [2024-11-17 13:23:06.995424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.859 [2024-11-17 13:23:07.041751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.859 [2024-11-17 13:23:07.042036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.859 [2024-11-17 13:23:07.042075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.859 [2024-11-17 13:23:07.042084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.859 [2024-11-17 13:23:07.042091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.859 [2024-11-17 13:23:07.042484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.118 [2024-11-17 13:23:07.096083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.4dOTcdtkO4 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4dOTcdtkO4 00:13:18.118 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:18.377 [2024-11-17 13:23:07.509238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.377 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:18.636 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:18.894 [2024-11-17 13:23:08.065335] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:18.894 [2024-11-17 13:23:08.065536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:18.894 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:19.153 malloc0 00:13:19.153 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:19.720 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:13:19.720 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:19.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
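For this second pass the target (pid 72059) starts with no -c file and is configured live over RPC; the setup_nvmf_tgt steps traced above, collected without the xtrace prefixes, come down to roughly the following, where the -k flag on the listener corresponds to the "secure_channel": true setting seen in the earlier config dump:

    # Target-side TLS setup as issued via rpc.py in the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
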
00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72107 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72107 /var/tmp/bdevperf.sock 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72107 ']' 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.979 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.979 [2024-11-17 13:23:09.177452] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:19.979 [2024-11-17 13:23:09.177776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72107 ] 00:13:20.238 [2024-11-17 13:23:09.323622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.238 [2024-11-17 13:23:09.385321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.238 [2024-11-17 13:23:09.437162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:21.173 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.173 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:21.173 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:13:21.173 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:21.431 [2024-11-17 13:23:10.516800] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:21.431 nvme0n1 00:13:21.431 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:21.690 Running I/O for 1 seconds... 
00:13:22.624 4605.00 IOPS, 17.99 MiB/s 00:13:22.624 Latency(us) 00:13:22.624 [2024-11-17T13:23:11.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.624 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.624 Verification LBA range: start 0x0 length 0x2000 00:13:22.624 nvme0n1 : 1.01 4664.20 18.22 0.00 0.00 27234.13 5779.08 23235.49 00:13:22.624 [2024-11-17T13:23:11.848Z] =================================================================================================================== 00:13:22.624 [2024-11-17T13:23:11.848Z] Total : 4664.20 18.22 0.00 0.00 27234.13 5779.08 23235.49 00:13:22.624 { 00:13:22.624 "results": [ 00:13:22.624 { 00:13:22.624 "job": "nvme0n1", 00:13:22.624 "core_mask": "0x2", 00:13:22.624 "workload": "verify", 00:13:22.624 "status": "finished", 00:13:22.624 "verify_range": { 00:13:22.624 "start": 0, 00:13:22.624 "length": 8192 00:13:22.624 }, 00:13:22.624 "queue_depth": 128, 00:13:22.624 "io_size": 4096, 00:13:22.624 "runtime": 1.014751, 00:13:22.624 "iops": 4664.198409264933, 00:13:22.624 "mibps": 18.219525036191143, 00:13:22.624 "io_failed": 0, 00:13:22.624 "io_timeout": 0, 00:13:22.624 "avg_latency_us": 27234.127219714577, 00:13:22.624 "min_latency_us": 5779.083636363636, 00:13:22.624 "max_latency_us": 23235.49090909091 00:13:22.624 } 00:13:22.624 ], 00:13:22.624 "core_count": 1 00:13:22.624 } 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72107 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72107 ']' 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72107 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72107 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:22.624 killing process with pid 72107 00:13:22.624 Received shutdown signal, test time was about 1.000000 seconds 00:13:22.624 00:13:22.624 Latency(us) 00:13:22.624 [2024-11-17T13:23:11.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.624 [2024-11-17T13:23:11.848Z] =================================================================================================================== 00:13:22.624 [2024-11-17T13:23:11.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72107' 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72107 00:13:22.624 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72107 00:13:22.883 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72059 00:13:22.883 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72059 ']' 00:13:22.883 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72059 00:13:22.883 13:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:22.883 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.883 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72059 00:13:22.883 killing process with pid 72059 00:13:22.883 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.883 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.883 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72059' 00:13:22.883 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72059 00:13:22.883 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72059 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72158 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72158 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72158 ']' 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.142 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.142 [2024-11-17 13:23:12.317800] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:23.142 [2024-11-17 13:23:12.317869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.401 [2024-11-17 13:23:12.455706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.401 [2024-11-17 13:23:12.498280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.401 [2024-11-17 13:23:12.498338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:23.401 [2024-11-17 13:23:12.498348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.401 [2024-11-17 13:23:12.498355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.401 [2024-11-17 13:23:12.498361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.401 [2024-11-17 13:23:12.498739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.401 [2024-11-17 13:23:12.569039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.337 [2024-11-17 13:23:13.309037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.337 malloc0 00:13:24.337 [2024-11-17 13:23:13.342575] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:24.337 [2024-11-17 13:23:13.342830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:24.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72190 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72190 /var/tmp/bdevperf.sock 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72190 ']' 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.337 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.337 [2024-11-17 13:23:13.429532] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:24.337 [2024-11-17 13:23:13.429626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72190 ] 00:13:24.596 [2024-11-17 13:23:13.576344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.596 [2024-11-17 13:23:13.627263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.597 [2024-11-17 13:23:13.679197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.597 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.597 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:24.597 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4dOTcdtkO4 00:13:24.855 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:25.114 [2024-11-17 13:23:14.194613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:25.114 nvme0n1 00:13:25.114 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:25.373 Running I/O for 1 seconds... 
00:13:26.309 4736.00 IOPS, 18.50 MiB/s 00:13:26.309 Latency(us) 00:13:26.309 [2024-11-17T13:23:15.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.309 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.309 Verification LBA range: start 0x0 length 0x2000 00:13:26.309 nvme0n1 : 1.01 4799.65 18.75 0.00 0.00 26455.87 4885.41 23592.96 00:13:26.309 [2024-11-17T13:23:15.533Z] =================================================================================================================== 00:13:26.309 [2024-11-17T13:23:15.533Z] Total : 4799.65 18.75 0.00 0.00 26455.87 4885.41 23592.96 00:13:26.309 { 00:13:26.309 "results": [ 00:13:26.309 { 00:13:26.309 "job": "nvme0n1", 00:13:26.309 "core_mask": "0x2", 00:13:26.309 "workload": "verify", 00:13:26.309 "status": "finished", 00:13:26.309 "verify_range": { 00:13:26.309 "start": 0, 00:13:26.309 "length": 8192 00:13:26.309 }, 00:13:26.309 "queue_depth": 128, 00:13:26.309 "io_size": 4096, 00:13:26.309 "runtime": 1.013407, 00:13:26.309 "iops": 4799.6510779972905, 00:13:26.309 "mibps": 18.748637023426916, 00:13:26.309 "io_failed": 0, 00:13:26.309 "io_timeout": 0, 00:13:26.309 "avg_latency_us": 26455.86985645933, 00:13:26.309 "min_latency_us": 4885.410909090909, 00:13:26.309 "max_latency_us": 23592.96 00:13:26.309 } 00:13:26.309 ], 00:13:26.309 "core_count": 1 00:13:26.309 } 00:13:26.309 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:13:26.309 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.309 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.569 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.569 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:13:26.569 "subsystems": [ 00:13:26.569 { 00:13:26.569 "subsystem": "keyring", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "keyring_file_add_key", 00:13:26.569 "params": { 00:13:26.569 "name": "key0", 00:13:26.569 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:26.569 } 00:13:26.569 } 00:13:26.569 ] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "iobuf", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "iobuf_set_options", 00:13:26.569 "params": { 00:13:26.569 "small_pool_count": 8192, 00:13:26.569 "large_pool_count": 1024, 00:13:26.569 "small_bufsize": 8192, 00:13:26.569 "large_bufsize": 135168, 00:13:26.569 "enable_numa": false 00:13:26.569 } 00:13:26.569 } 00:13:26.569 ] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "sock", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "sock_set_default_impl", 00:13:26.569 "params": { 00:13:26.569 "impl_name": "uring" 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "sock_impl_set_options", 00:13:26.569 "params": { 00:13:26.569 "impl_name": "ssl", 00:13:26.569 "recv_buf_size": 4096, 00:13:26.569 "send_buf_size": 4096, 00:13:26.569 "enable_recv_pipe": true, 00:13:26.569 "enable_quickack": false, 00:13:26.569 "enable_placement_id": 0, 00:13:26.569 "enable_zerocopy_send_server": true, 00:13:26.569 "enable_zerocopy_send_client": false, 00:13:26.569 "zerocopy_threshold": 0, 00:13:26.569 "tls_version": 0, 00:13:26.569 "enable_ktls": false 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "sock_impl_set_options", 00:13:26.569 "params": { 00:13:26.569 "impl_name": "posix", 
00:13:26.569 "recv_buf_size": 2097152, 00:13:26.569 "send_buf_size": 2097152, 00:13:26.569 "enable_recv_pipe": true, 00:13:26.569 "enable_quickack": false, 00:13:26.569 "enable_placement_id": 0, 00:13:26.569 "enable_zerocopy_send_server": true, 00:13:26.569 "enable_zerocopy_send_client": false, 00:13:26.569 "zerocopy_threshold": 0, 00:13:26.569 "tls_version": 0, 00:13:26.569 "enable_ktls": false 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "sock_impl_set_options", 00:13:26.569 "params": { 00:13:26.569 "impl_name": "uring", 00:13:26.569 "recv_buf_size": 2097152, 00:13:26.569 "send_buf_size": 2097152, 00:13:26.569 "enable_recv_pipe": true, 00:13:26.569 "enable_quickack": false, 00:13:26.569 "enable_placement_id": 0, 00:13:26.569 "enable_zerocopy_send_server": false, 00:13:26.569 "enable_zerocopy_send_client": false, 00:13:26.569 "zerocopy_threshold": 0, 00:13:26.569 "tls_version": 0, 00:13:26.569 "enable_ktls": false 00:13:26.569 } 00:13:26.569 } 00:13:26.569 ] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "vmd", 00:13:26.569 "config": [] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "accel", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "accel_set_options", 00:13:26.569 "params": { 00:13:26.569 "small_cache_size": 128, 00:13:26.569 "large_cache_size": 16, 00:13:26.569 "task_count": 2048, 00:13:26.569 "sequence_count": 2048, 00:13:26.569 "buf_count": 2048 00:13:26.569 } 00:13:26.569 } 00:13:26.569 ] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "bdev", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "bdev_set_options", 00:13:26.569 "params": { 00:13:26.569 "bdev_io_pool_size": 65535, 00:13:26.569 "bdev_io_cache_size": 256, 00:13:26.569 "bdev_auto_examine": true, 00:13:26.569 "iobuf_small_cache_size": 128, 00:13:26.569 "iobuf_large_cache_size": 16 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "bdev_raid_set_options", 00:13:26.569 "params": { 00:13:26.569 "process_window_size_kb": 1024, 00:13:26.569 "process_max_bandwidth_mb_sec": 0 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "bdev_iscsi_set_options", 00:13:26.569 "params": { 00:13:26.569 "timeout_sec": 30 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "bdev_nvme_set_options", 00:13:26.569 "params": { 00:13:26.569 "action_on_timeout": "none", 00:13:26.569 "timeout_us": 0, 00:13:26.569 "timeout_admin_us": 0, 00:13:26.569 "keep_alive_timeout_ms": 10000, 00:13:26.569 "arbitration_burst": 0, 00:13:26.569 "low_priority_weight": 0, 00:13:26.569 "medium_priority_weight": 0, 00:13:26.569 "high_priority_weight": 0, 00:13:26.569 "nvme_adminq_poll_period_us": 10000, 00:13:26.569 "nvme_ioq_poll_period_us": 0, 00:13:26.569 "io_queue_requests": 0, 00:13:26.569 "delay_cmd_submit": true, 00:13:26.569 "transport_retry_count": 4, 00:13:26.569 "bdev_retry_count": 3, 00:13:26.569 "transport_ack_timeout": 0, 00:13:26.569 "ctrlr_loss_timeout_sec": 0, 00:13:26.569 "reconnect_delay_sec": 0, 00:13:26.569 "fast_io_fail_timeout_sec": 0, 00:13:26.569 "disable_auto_failback": false, 00:13:26.569 "generate_uuids": false, 00:13:26.569 "transport_tos": 0, 00:13:26.569 "nvme_error_stat": false, 00:13:26.569 "rdma_srq_size": 0, 00:13:26.569 "io_path_stat": false, 00:13:26.569 "allow_accel_sequence": false, 00:13:26.569 "rdma_max_cq_size": 0, 00:13:26.569 "rdma_cm_event_timeout_ms": 0, 00:13:26.569 "dhchap_digests": [ 00:13:26.569 "sha256", 00:13:26.569 "sha384", 00:13:26.569 "sha512" 00:13:26.569 ], 00:13:26.569 
"dhchap_dhgroups": [ 00:13:26.569 "null", 00:13:26.569 "ffdhe2048", 00:13:26.569 "ffdhe3072", 00:13:26.569 "ffdhe4096", 00:13:26.569 "ffdhe6144", 00:13:26.569 "ffdhe8192" 00:13:26.569 ] 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "bdev_nvme_set_hotplug", 00:13:26.569 "params": { 00:13:26.569 "period_us": 100000, 00:13:26.569 "enable": false 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "bdev_malloc_create", 00:13:26.569 "params": { 00:13:26.569 "name": "malloc0", 00:13:26.569 "num_blocks": 8192, 00:13:26.569 "block_size": 4096, 00:13:26.569 "physical_block_size": 4096, 00:13:26.569 "uuid": "faa67904-6bfa-4777-bd3d-1f2b617c860b", 00:13:26.569 "optimal_io_boundary": 0, 00:13:26.569 "md_size": 0, 00:13:26.569 "dif_type": 0, 00:13:26.569 "dif_is_head_of_md": false, 00:13:26.569 "dif_pi_format": 0 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "bdev_wait_for_examine" 00:13:26.569 } 00:13:26.569 ] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "nbd", 00:13:26.569 "config": [] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "scheduler", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "framework_set_scheduler", 00:13:26.569 "params": { 00:13:26.569 "name": "static" 00:13:26.569 } 00:13:26.569 } 00:13:26.569 ] 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "subsystem": "nvmf", 00:13:26.569 "config": [ 00:13:26.569 { 00:13:26.569 "method": "nvmf_set_config", 00:13:26.569 "params": { 00:13:26.569 "discovery_filter": "match_any", 00:13:26.569 "admin_cmd_passthru": { 00:13:26.569 "identify_ctrlr": false 00:13:26.569 }, 00:13:26.569 "dhchap_digests": [ 00:13:26.569 "sha256", 00:13:26.569 "sha384", 00:13:26.569 "sha512" 00:13:26.569 ], 00:13:26.569 "dhchap_dhgroups": [ 00:13:26.569 "null", 00:13:26.569 "ffdhe2048", 00:13:26.569 "ffdhe3072", 00:13:26.569 "ffdhe4096", 00:13:26.569 "ffdhe6144", 00:13:26.569 "ffdhe8192" 00:13:26.569 ] 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "nvmf_set_max_subsystems", 00:13:26.569 "params": { 00:13:26.569 "max_subsystems": 1024 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "nvmf_set_crdt", 00:13:26.569 "params": { 00:13:26.569 "crdt1": 0, 00:13:26.569 "crdt2": 0, 00:13:26.569 "crdt3": 0 00:13:26.569 } 00:13:26.569 }, 00:13:26.569 { 00:13:26.569 "method": "nvmf_create_transport", 00:13:26.569 "params": { 00:13:26.569 "trtype": "TCP", 00:13:26.569 "max_queue_depth": 128, 00:13:26.569 "max_io_qpairs_per_ctrlr": 127, 00:13:26.569 "in_capsule_data_size": 4096, 00:13:26.569 "max_io_size": 131072, 00:13:26.569 "io_unit_size": 131072, 00:13:26.569 "max_aq_depth": 128, 00:13:26.569 "num_shared_buffers": 511, 00:13:26.570 "buf_cache_size": 4294967295, 00:13:26.570 "dif_insert_or_strip": false, 00:13:26.570 "zcopy": false, 00:13:26.570 "c2h_success": false, 00:13:26.570 "sock_priority": 0, 00:13:26.570 "abort_timeout_sec": 1, 00:13:26.570 "ack_timeout": 0, 00:13:26.570 "data_wr_pool_size": 0 00:13:26.570 } 00:13:26.570 }, 00:13:26.570 { 00:13:26.570 "method": "nvmf_create_subsystem", 00:13:26.570 "params": { 00:13:26.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.570 "allow_any_host": false, 00:13:26.570 "serial_number": "00000000000000000000", 00:13:26.570 "model_number": "SPDK bdev Controller", 00:13:26.570 "max_namespaces": 32, 00:13:26.570 "min_cntlid": 1, 00:13:26.570 "max_cntlid": 65519, 00:13:26.570 "ana_reporting": false 00:13:26.570 } 00:13:26.570 }, 00:13:26.570 { 00:13:26.570 "method": "nvmf_subsystem_add_host", 
00:13:26.570 "params": { 00:13:26.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.570 "host": "nqn.2016-06.io.spdk:host1", 00:13:26.570 "psk": "key0" 00:13:26.570 } 00:13:26.570 }, 00:13:26.570 { 00:13:26.570 "method": "nvmf_subsystem_add_ns", 00:13:26.570 "params": { 00:13:26.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.570 "namespace": { 00:13:26.570 "nsid": 1, 00:13:26.570 "bdev_name": "malloc0", 00:13:26.570 "nguid": "FAA679046BFA4777BD3D1F2B617C860B", 00:13:26.570 "uuid": "faa67904-6bfa-4777-bd3d-1f2b617c860b", 00:13:26.570 "no_auto_visible": false 00:13:26.570 } 00:13:26.570 } 00:13:26.570 }, 00:13:26.570 { 00:13:26.570 "method": "nvmf_subsystem_add_listener", 00:13:26.570 "params": { 00:13:26.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.570 "listen_address": { 00:13:26.570 "trtype": "TCP", 00:13:26.570 "adrfam": "IPv4", 00:13:26.570 "traddr": "10.0.0.3", 00:13:26.570 "trsvcid": "4420" 00:13:26.570 }, 00:13:26.570 "secure_channel": false, 00:13:26.570 "sock_impl": "ssl" 00:13:26.570 } 00:13:26.570 } 00:13:26.570 ] 00:13:26.570 } 00:13:26.570 ] 00:13:26.570 }' 00:13:26.570 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:26.830 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:13:26.830 "subsystems": [ 00:13:26.830 { 00:13:26.830 "subsystem": "keyring", 00:13:26.830 "config": [ 00:13:26.830 { 00:13:26.830 "method": "keyring_file_add_key", 00:13:26.830 "params": { 00:13:26.830 "name": "key0", 00:13:26.830 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:26.830 } 00:13:26.830 } 00:13:26.830 ] 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "subsystem": "iobuf", 00:13:26.830 "config": [ 00:13:26.830 { 00:13:26.830 "method": "iobuf_set_options", 00:13:26.830 "params": { 00:13:26.830 "small_pool_count": 8192, 00:13:26.830 "large_pool_count": 1024, 00:13:26.830 "small_bufsize": 8192, 00:13:26.830 "large_bufsize": 135168, 00:13:26.830 "enable_numa": false 00:13:26.830 } 00:13:26.830 } 00:13:26.830 ] 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "subsystem": "sock", 00:13:26.830 "config": [ 00:13:26.830 { 00:13:26.830 "method": "sock_set_default_impl", 00:13:26.830 "params": { 00:13:26.830 "impl_name": "uring" 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "sock_impl_set_options", 00:13:26.830 "params": { 00:13:26.830 "impl_name": "ssl", 00:13:26.830 "recv_buf_size": 4096, 00:13:26.830 "send_buf_size": 4096, 00:13:26.830 "enable_recv_pipe": true, 00:13:26.830 "enable_quickack": false, 00:13:26.830 "enable_placement_id": 0, 00:13:26.830 "enable_zerocopy_send_server": true, 00:13:26.830 "enable_zerocopy_send_client": false, 00:13:26.830 "zerocopy_threshold": 0, 00:13:26.830 "tls_version": 0, 00:13:26.830 "enable_ktls": false 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "sock_impl_set_options", 00:13:26.830 "params": { 00:13:26.830 "impl_name": "posix", 00:13:26.830 "recv_buf_size": 2097152, 00:13:26.830 "send_buf_size": 2097152, 00:13:26.830 "enable_recv_pipe": true, 00:13:26.830 "enable_quickack": false, 00:13:26.830 "enable_placement_id": 0, 00:13:26.830 "enable_zerocopy_send_server": true, 00:13:26.830 "enable_zerocopy_send_client": false, 00:13:26.830 "zerocopy_threshold": 0, 00:13:26.830 "tls_version": 0, 00:13:26.830 "enable_ktls": false 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "sock_impl_set_options", 00:13:26.830 "params": { 00:13:26.830 "impl_name": "uring", 00:13:26.830 
"recv_buf_size": 2097152, 00:13:26.830 "send_buf_size": 2097152, 00:13:26.830 "enable_recv_pipe": true, 00:13:26.830 "enable_quickack": false, 00:13:26.830 "enable_placement_id": 0, 00:13:26.830 "enable_zerocopy_send_server": false, 00:13:26.830 "enable_zerocopy_send_client": false, 00:13:26.830 "zerocopy_threshold": 0, 00:13:26.830 "tls_version": 0, 00:13:26.830 "enable_ktls": false 00:13:26.830 } 00:13:26.830 } 00:13:26.830 ] 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "subsystem": "vmd", 00:13:26.830 "config": [] 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "subsystem": "accel", 00:13:26.830 "config": [ 00:13:26.830 { 00:13:26.830 "method": "accel_set_options", 00:13:26.830 "params": { 00:13:26.830 "small_cache_size": 128, 00:13:26.830 "large_cache_size": 16, 00:13:26.830 "task_count": 2048, 00:13:26.830 "sequence_count": 2048, 00:13:26.830 "buf_count": 2048 00:13:26.830 } 00:13:26.830 } 00:13:26.830 ] 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "subsystem": "bdev", 00:13:26.830 "config": [ 00:13:26.830 { 00:13:26.830 "method": "bdev_set_options", 00:13:26.830 "params": { 00:13:26.830 "bdev_io_pool_size": 65535, 00:13:26.830 "bdev_io_cache_size": 256, 00:13:26.830 "bdev_auto_examine": true, 00:13:26.830 "iobuf_small_cache_size": 128, 00:13:26.830 "iobuf_large_cache_size": 16 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_raid_set_options", 00:13:26.830 "params": { 00:13:26.830 "process_window_size_kb": 1024, 00:13:26.830 "process_max_bandwidth_mb_sec": 0 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_iscsi_set_options", 00:13:26.830 "params": { 00:13:26.830 "timeout_sec": 30 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_nvme_set_options", 00:13:26.830 "params": { 00:13:26.830 "action_on_timeout": "none", 00:13:26.830 "timeout_us": 0, 00:13:26.830 "timeout_admin_us": 0, 00:13:26.830 "keep_alive_timeout_ms": 10000, 00:13:26.830 "arbitration_burst": 0, 00:13:26.830 "low_priority_weight": 0, 00:13:26.830 "medium_priority_weight": 0, 00:13:26.830 "high_priority_weight": 0, 00:13:26.830 "nvme_adminq_poll_period_us": 10000, 00:13:26.830 "nvme_ioq_poll_period_us": 0, 00:13:26.830 "io_queue_requests": 512, 00:13:26.830 "delay_cmd_submit": true, 00:13:26.830 "transport_retry_count": 4, 00:13:26.830 "bdev_retry_count": 3, 00:13:26.830 "transport_ack_timeout": 0, 00:13:26.830 "ctrlr_loss_timeout_sec": 0, 00:13:26.830 "reconnect_delay_sec": 0, 00:13:26.830 "fast_io_fail_timeout_sec": 0, 00:13:26.830 "disable_auto_failback": false, 00:13:26.830 "generate_uuids": false, 00:13:26.830 "transport_tos": 0, 00:13:26.830 "nvme_error_stat": false, 00:13:26.830 "rdma_srq_size": 0, 00:13:26.830 "io_path_stat": false, 00:13:26.830 "allow_accel_sequence": false, 00:13:26.830 "rdma_max_cq_size": 0, 00:13:26.830 "rdma_cm_event_timeout_ms": 0, 00:13:26.830 "dhchap_digests": [ 00:13:26.830 "sha256", 00:13:26.830 "sha384", 00:13:26.830 "sha512" 00:13:26.830 ], 00:13:26.830 "dhchap_dhgroups": [ 00:13:26.830 "null", 00:13:26.830 "ffdhe2048", 00:13:26.830 "ffdhe3072", 00:13:26.830 "ffdhe4096", 00:13:26.830 "ffdhe6144", 00:13:26.830 "ffdhe8192" 00:13:26.830 ] 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_nvme_attach_controller", 00:13:26.830 "params": { 00:13:26.830 "name": "nvme0", 00:13:26.830 "trtype": "TCP", 00:13:26.830 "adrfam": "IPv4", 00:13:26.830 "traddr": "10.0.0.3", 00:13:26.830 "trsvcid": "4420", 00:13:26.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.830 "prchk_reftag": false, 00:13:26.830 
"prchk_guard": false, 00:13:26.830 "ctrlr_loss_timeout_sec": 0, 00:13:26.830 "reconnect_delay_sec": 0, 00:13:26.830 "fast_io_fail_timeout_sec": 0, 00:13:26.830 "psk": "key0", 00:13:26.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:26.830 "hdgst": false, 00:13:26.830 "ddgst": false, 00:13:26.830 "multipath": "multipath" 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_nvme_set_hotplug", 00:13:26.830 "params": { 00:13:26.830 "period_us": 100000, 00:13:26.830 "enable": false 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_enable_histogram", 00:13:26.830 "params": { 00:13:26.830 "name": "nvme0n1", 00:13:26.830 "enable": true 00:13:26.830 } 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "method": "bdev_wait_for_examine" 00:13:26.830 } 00:13:26.830 ] 00:13:26.830 }, 00:13:26.830 { 00:13:26.830 "subsystem": "nbd", 00:13:26.830 "config": [] 00:13:26.830 } 00:13:26.830 ] 00:13:26.830 }' 00:13:26.830 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72190 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72190 ']' 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72190 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72190 00:13:26.831 killing process with pid 72190 00:13:26.831 Received shutdown signal, test time was about 1.000000 seconds 00:13:26.831 00:13:26.831 Latency(us) 00:13:26.831 [2024-11-17T13:23:16.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.831 [2024-11-17T13:23:16.055Z] =================================================================================================================== 00:13:26.831 [2024-11-17T13:23:16.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72190' 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72190 00:13:26.831 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72190 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72158 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72158 ']' 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72158 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72158 00:13:27.090 killing process with pid 72158 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72158' 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72158 00:13:27.090 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72158 00:13:27.349 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:13:27.349 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.349 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.349 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:13:27.349 "subsystems": [ 00:13:27.349 { 00:13:27.349 "subsystem": "keyring", 00:13:27.349 "config": [ 00:13:27.349 { 00:13:27.349 "method": "keyring_file_add_key", 00:13:27.349 "params": { 00:13:27.349 "name": "key0", 00:13:27.349 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:27.349 } 00:13:27.349 } 00:13:27.349 ] 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "subsystem": "iobuf", 00:13:27.349 "config": [ 00:13:27.349 { 00:13:27.349 "method": "iobuf_set_options", 00:13:27.349 "params": { 00:13:27.349 "small_pool_count": 8192, 00:13:27.349 "large_pool_count": 1024, 00:13:27.349 "small_bufsize": 8192, 00:13:27.349 "large_bufsize": 135168, 00:13:27.349 "enable_numa": false 00:13:27.349 } 00:13:27.349 } 00:13:27.349 ] 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "subsystem": "sock", 00:13:27.349 "config": [ 00:13:27.349 { 00:13:27.349 "method": "sock_set_default_impl", 00:13:27.349 "params": { 00:13:27.349 "impl_name": "uring" 00:13:27.349 } 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "method": "sock_impl_set_options", 00:13:27.349 "params": { 00:13:27.349 "impl_name": "ssl", 00:13:27.349 "recv_buf_size": 4096, 00:13:27.349 "send_buf_size": 4096, 00:13:27.349 "enable_recv_pipe": true, 00:13:27.349 "enable_quickack": false, 00:13:27.349 "enable_placement_id": 0, 00:13:27.349 "enable_zerocopy_send_server": true, 00:13:27.349 "enable_zerocopy_send_client": false, 00:13:27.349 "zerocopy_threshold": 0, 00:13:27.349 "tls_version": 0, 00:13:27.349 "enable_ktls": false 00:13:27.349 } 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "method": "sock_impl_set_options", 00:13:27.349 "params": { 00:13:27.349 "impl_name": "posix", 00:13:27.349 "recv_buf_size": 2097152, 00:13:27.349 "send_buf_size": 2097152, 00:13:27.349 "enable_recv_pipe": true, 00:13:27.349 "enable_quickack": false, 00:13:27.349 "enable_placement_id": 0, 00:13:27.349 "enable_zerocopy_send_server": true, 00:13:27.349 "enable_zerocopy_send_client": false, 00:13:27.349 "zerocopy_threshold": 0, 00:13:27.349 "tls_version": 0, 00:13:27.349 "enable_ktls": false 00:13:27.349 } 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "method": "sock_impl_set_options", 00:13:27.349 "params": { 00:13:27.349 "impl_name": "uring", 00:13:27.349 "recv_buf_size": 2097152, 00:13:27.349 "send_buf_size": 2097152, 00:13:27.349 "enable_recv_pipe": true, 00:13:27.349 "enable_quickack": false, 00:13:27.349 "enable_placement_id": 0, 00:13:27.349 "enable_zerocopy_send_server": false, 00:13:27.349 "enable_zerocopy_send_client": false, 00:13:27.349 "zerocopy_threshold": 0, 00:13:27.349 "tls_version": 0, 00:13:27.349 "enable_ktls": false 00:13:27.349 } 00:13:27.349 } 00:13:27.349 ] 00:13:27.349 }, 00:13:27.349 { 
00:13:27.349 "subsystem": "vmd", 00:13:27.349 "config": [] 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "subsystem": "accel", 00:13:27.349 "config": [ 00:13:27.349 { 00:13:27.349 "method": "accel_set_options", 00:13:27.349 "params": { 00:13:27.349 "small_cache_size": 128, 00:13:27.349 "large_cache_size": 16, 00:13:27.349 "task_count": 2048, 00:13:27.349 "sequence_count": 2048, 00:13:27.349 "buf_count": 2048 00:13:27.349 } 00:13:27.349 } 00:13:27.349 ] 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "subsystem": "bdev", 00:13:27.349 "config": [ 00:13:27.349 { 00:13:27.349 "method": "bdev_set_options", 00:13:27.349 "params": { 00:13:27.349 "bdev_io_pool_size": 65535, 00:13:27.349 "bdev_io_cache_size": 256, 00:13:27.349 "bdev_auto_examine": true, 00:13:27.349 "iobuf_small_cache_size": 128, 00:13:27.349 "iobuf_large_cache_size": 16 00:13:27.349 } 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "method": "bdev_raid_set_options", 00:13:27.349 "params": { 00:13:27.349 "process_window_size_kb": 1024, 00:13:27.349 "process_max_bandwidth_mb_sec": 0 00:13:27.349 } 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "method": "bdev_iscsi_set_options", 00:13:27.349 "params": { 00:13:27.349 "timeout_sec": 30 00:13:27.349 } 00:13:27.349 }, 00:13:27.349 { 00:13:27.349 "method": "bdev_nvme_set_options", 00:13:27.349 "params": { 00:13:27.349 "action_on_timeout": "none", 00:13:27.349 "timeout_us": 0, 00:13:27.349 "timeout_admin_us": 0, 00:13:27.349 "keep_alive_timeout_ms": 10000, 00:13:27.349 "arbitration_burst": 0, 00:13:27.349 "low_priority_weight": 0, 00:13:27.349 "medium_priority_weight": 0, 00:13:27.349 "high_priority_weight": 0, 00:13:27.349 "nvme_adminq_poll_period_us": 10000, 00:13:27.349 "nvme_ioq_poll_period_us": 0, 00:13:27.349 "io_queue_requests": 0, 00:13:27.349 "delay_cmd_submit": true, 00:13:27.349 "transport_retry_count": 4, 00:13:27.349 "bdev_retry_count": 3, 00:13:27.349 "transport_ack_timeout": 0, 00:13:27.349 "ctrlr_loss_timeout_sec": 0, 00:13:27.349 "reconnect_delay_sec": 0, 00:13:27.349 "fast_io_fail_timeout_sec": 0, 00:13:27.349 "disable_auto_failback": false, 00:13:27.349 "generate_uuids": false, 00:13:27.349 "transport_tos": 0, 00:13:27.349 "nvme_error_stat": false, 00:13:27.349 "rdma_srq_size": 0, 00:13:27.349 "io_path_stat": false, 00:13:27.349 "allow_accel_sequence": false, 00:13:27.349 "rdma_max_cq_size": 0, 00:13:27.349 "rdma_cm_event_timeout_ms": 0, 00:13:27.349 "dhchap_digests": [ 00:13:27.349 "sha256", 00:13:27.349 "sha384", 00:13:27.349 "sha512" 00:13:27.349 ], 00:13:27.349 "dhchap_dhgroups": [ 00:13:27.349 "null", 00:13:27.349 "ffdhe2048", 00:13:27.349 "ffdhe3072", 00:13:27.350 "ffdhe4096", 00:13:27.350 "ffdhe6144", 00:13:27.350 "ffdhe8192" 00:13:27.350 ] 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "bdev_nvme_set_hotplug", 00:13:27.350 "params": { 00:13:27.350 "period_us": 100000, 00:13:27.350 "enable": false 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "bdev_malloc_create", 00:13:27.350 "params": { 00:13:27.350 "name": "malloc0", 00:13:27.350 "num_blocks": 8192, 00:13:27.350 "block_size": 4096, 00:13:27.350 "physical_block_size": 4096, 00:13:27.350 "uuid": "faa67904-6bfa-4777-bd3d-1f2b617c860b", 00:13:27.350 "optimal_io_boundary": 0, 00:13:27.350 "md_size": 0, 00:13:27.350 "dif_type": 0, 00:13:27.350 "dif_is_head_of_md": false, 00:13:27.350 "dif_pi_format": 0 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "bdev_wait_for_examine" 00:13:27.350 } 00:13:27.350 ] 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "subsystem": 
"nbd", 00:13:27.350 "config": [] 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "subsystem": "scheduler", 00:13:27.350 "config": [ 00:13:27.350 { 00:13:27.350 "method": "framework_set_scheduler", 00:13:27.350 "params": { 00:13:27.350 "name": "static" 00:13:27.350 } 00:13:27.350 } 00:13:27.350 ] 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "subsystem": "nvmf", 00:13:27.350 "config": [ 00:13:27.350 { 00:13:27.350 "method": "nvmf_set_config", 00:13:27.350 "params": { 00:13:27.350 "discovery_filter": "match_any", 00:13:27.350 "admin_cmd_passthru": { 00:13:27.350 "identify_ctrlr": false 00:13:27.350 }, 00:13:27.350 "dhchap_digests": [ 00:13:27.350 "sha256", 00:13:27.350 "sha384", 00:13:27.350 "sha512" 00:13:27.350 ], 00:13:27.350 "dhchap_dhgroups": [ 00:13:27.350 "null", 00:13:27.350 "ffdhe2048", 00:13:27.350 "ffdhe3072", 00:13:27.350 "ffdhe4096", 00:13:27.350 "ffdhe6144", 00:13:27.350 "ffdhe8192" 00:13:27.350 ] 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_set_max_subsystems", 00:13:27.350 "params": { 00:13:27.350 "max_subsystems": 1024 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_set_crdt", 00:13:27.350 "params": { 00:13:27.350 "crdt1": 0, 00:13:27.350 "crdt2": 0, 00:13:27.350 "crdt3": 0 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_create_transport", 00:13:27.350 "params": { 00:13:27.350 "trtype": "TCP", 00:13:27.350 "max_queue_depth": 128, 00:13:27.350 "max_io_qpairs_per_ctrlr": 127, 00:13:27.350 "in_capsule_data_size": 4096, 00:13:27.350 "max_io_size": 131072, 00:13:27.350 "io_unit_size": 131072, 00:13:27.350 "max_aq_depth": 128, 00:13:27.350 "num_shared_buffers": 511, 00:13:27.350 "buf_cache_size": 4294967295, 00:13:27.350 "dif_insert_or_strip": false, 00:13:27.350 "zcopy": false, 00:13:27.350 "c2h_success": false, 00:13:27.350 "sock_priority": 0, 00:13:27.350 "abort_timeout_sec": 1, 00:13:27.350 "ack_timeout": 0, 00:13:27.350 "data_wr_pool_size": 0 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_create_subsystem", 00:13:27.350 "params": { 00:13:27.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.350 "allow_any_host": false, 00:13:27.350 "serial_number": "00000000000000000000", 00:13:27.350 "model_number": "SPDK bdev Controller", 00:13:27.350 "max_namespaces": 32, 00:13:27.350 "min_cntlid": 1, 00:13:27.350 "max_cntlid": 65519, 00:13:27.350 "ana_reporting": false 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_subsystem_add_host", 00:13:27.350 "params": { 00:13:27.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.350 "host": "nqn.2016-06.io.spdk:host1", 00:13:27.350 "psk": "key0" 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_subsystem_add_ns", 00:13:27.350 "params": { 00:13:27.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.350 "namespace": { 00:13:27.350 "nsid": 1, 00:13:27.350 "bdev_name": "malloc0", 00:13:27.350 "nguid": "FAA679046BFA4777BD3D1F2B617C860B", 00:13:27.350 "uuid": "faa67904-6bfa-4777-bd3d-1f2b617c860b", 00:13:27.350 "no_auto_visible": false 00:13:27.350 } 00:13:27.350 } 00:13:27.350 }, 00:13:27.350 { 00:13:27.350 "method": "nvmf_subsystem_add_listener", 00:13:27.350 "params": { 00:13:27.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.350 "listen_address": { 00:13:27.350 "trtype": "TCP", 00:13:27.350 "adrfam": "IPv4", 00:13:27.350 "traddr": "10.0.0.3", 00:13:27.350 "trsvcid": "4420" 00:13:27.350 }, 00:13:27.350 "secure_channel": false, 00:13:27.350 "sock_impl": "ssl" 00:13:27.350 } 00:13:27.350 } 
00:13:27.350 ] 00:13:27.350 } 00:13:27.350 ] 00:13:27.350 }' 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72243 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72243 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72243 ']' 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.350 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.350 [2024-11-17 13:23:16.436529] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:27.350 [2024-11-17 13:23:16.436632] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.609 [2024-11-17 13:23:16.582014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.609 [2024-11-17 13:23:16.621380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.609 [2024-11-17 13:23:16.621445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.609 [2024-11-17 13:23:16.621455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.609 [2024-11-17 13:23:16.621462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.609 [2024-11-17 13:23:16.621469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:27.609 [2024-11-17 13:23:16.621899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.609 [2024-11-17 13:23:16.805077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.868 [2024-11-17 13:23:16.895810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.868 [2024-11-17 13:23:16.927800] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:27.868 [2024-11-17 13:23:16.928031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72275 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72275 /var/tmp/bdevperf.sock 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72275 ']' 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.435 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:13:28.435 "subsystems": [ 00:13:28.435 { 00:13:28.435 "subsystem": "keyring", 00:13:28.435 "config": [ 00:13:28.435 { 00:13:28.435 "method": "keyring_file_add_key", 00:13:28.435 "params": { 00:13:28.435 "name": "key0", 00:13:28.435 "path": "/tmp/tmp.4dOTcdtkO4" 00:13:28.435 } 00:13:28.435 } 00:13:28.435 ] 00:13:28.435 }, 00:13:28.435 { 00:13:28.435 "subsystem": "iobuf", 00:13:28.435 "config": [ 00:13:28.435 { 00:13:28.435 "method": "iobuf_set_options", 00:13:28.435 "params": { 00:13:28.435 "small_pool_count": 8192, 00:13:28.435 "large_pool_count": 1024, 00:13:28.436 "small_bufsize": 8192, 00:13:28.436 "large_bufsize": 135168, 00:13:28.436 "enable_numa": false 00:13:28.436 } 00:13:28.436 } 00:13:28.436 ] 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "subsystem": "sock", 00:13:28.436 "config": [ 00:13:28.436 { 00:13:28.436 "method": "sock_set_default_impl", 00:13:28.436 "params": { 00:13:28.436 "impl_name": "uring" 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "sock_impl_set_options", 00:13:28.436 "params": { 00:13:28.436 "impl_name": "ssl", 00:13:28.436 "recv_buf_size": 4096, 00:13:28.436 "send_buf_size": 4096, 00:13:28.436 "enable_recv_pipe": true, 00:13:28.436 "enable_quickack": false, 00:13:28.436 "enable_placement_id": 0, 00:13:28.436 "enable_zerocopy_send_server": true, 00:13:28.436 "enable_zerocopy_send_client": false, 00:13:28.436 "zerocopy_threshold": 0, 00:13:28.436 "tls_version": 0, 00:13:28.436 "enable_ktls": false 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "sock_impl_set_options", 00:13:28.436 "params": { 00:13:28.436 "impl_name": "posix", 00:13:28.436 "recv_buf_size": 2097152, 00:13:28.436 "send_buf_size": 2097152, 00:13:28.436 "enable_recv_pipe": true, 00:13:28.436 "enable_quickack": false, 00:13:28.436 "enable_placement_id": 0, 00:13:28.436 "enable_zerocopy_send_server": true, 00:13:28.436 "enable_zerocopy_send_client": false, 00:13:28.436 "zerocopy_threshold": 0, 00:13:28.436 "tls_version": 0, 00:13:28.436 "enable_ktls": false 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "sock_impl_set_options", 00:13:28.436 "params": { 00:13:28.436 "impl_name": "uring", 00:13:28.436 "recv_buf_size": 2097152, 00:13:28.436 "send_buf_size": 2097152, 00:13:28.436 "enable_recv_pipe": true, 00:13:28.436 "enable_quickack": false, 00:13:28.436 "enable_placement_id": 0, 00:13:28.436 "enable_zerocopy_send_server": false, 00:13:28.436 "enable_zerocopy_send_client": false, 00:13:28.436 "zerocopy_threshold": 0, 00:13:28.436 "tls_version": 0, 00:13:28.436 "enable_ktls": false 00:13:28.436 } 00:13:28.436 } 00:13:28.436 ] 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "subsystem": "vmd", 00:13:28.436 "config": [] 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "subsystem": "accel", 00:13:28.436 "config": [ 00:13:28.436 { 00:13:28.436 "method": "accel_set_options", 00:13:28.436 "params": { 00:13:28.436 "small_cache_size": 128, 00:13:28.436 "large_cache_size": 16, 00:13:28.436 "task_count": 2048, 00:13:28.436 "sequence_count": 2048, 
00:13:28.436 "buf_count": 2048 00:13:28.436 } 00:13:28.436 } 00:13:28.436 ] 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "subsystem": "bdev", 00:13:28.436 "config": [ 00:13:28.436 { 00:13:28.436 "method": "bdev_set_options", 00:13:28.436 "params": { 00:13:28.436 "bdev_io_pool_size": 65535, 00:13:28.436 "bdev_io_cache_size": 256, 00:13:28.436 "bdev_auto_examine": true, 00:13:28.436 "iobuf_small_cache_size": 128, 00:13:28.436 "iobuf_large_cache_size": 16 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "bdev_raid_set_options", 00:13:28.436 "params": { 00:13:28.436 "process_window_size_kb": 1024, 00:13:28.436 "process_max_bandwidth_mb_sec": 0 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "bdev_iscsi_set_options", 00:13:28.436 "params": { 00:13:28.436 "timeout_sec": 30 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "bdev_nvme_set_options", 00:13:28.436 "params": { 00:13:28.436 "action_on_timeout": "none", 00:13:28.436 "timeout_us": 0, 00:13:28.436 "timeout_admin_us": 0, 00:13:28.436 "keep_alive_timeout_ms": 10000, 00:13:28.436 "arbitration_burst": 0, 00:13:28.436 "low_priority_weight": 0, 00:13:28.436 "medium_priority_weight": 0, 00:13:28.436 "high_priority_weight": 0, 00:13:28.436 "nvme_adminq_poll_period_us": 10000, 00:13:28.436 "nvme_ioq_poll_period_us": 0, 00:13:28.436 "io_queue_requests": 512, 00:13:28.436 "delay_cmd_submit": true, 00:13:28.436 "transport_retry_count": 4, 00:13:28.436 "bdev_retry_count": 3, 00:13:28.436 "transport_ack_timeout": 0, 00:13:28.436 "ctrlr_loss_timeout_sec": 0, 00:13:28.436 "reconnect_delay_sec": 0, 00:13:28.436 "fast_io_fail_timeout_sec": 0, 00:13:28.436 "disable_auto_failback": false, 00:13:28.436 "generate_uuids": false, 00:13:28.436 "transport_tos": 0, 00:13:28.436 "nvme_error_stat": false, 00:13:28.436 "rdma_srq_size": 0, 00:13:28.436 "io_path_stat": false, 00:13:28.436 "allow_accel_sequence": false, 00:13:28.436 "rdma_max_cq_size": 0, 00:13:28.436 "rdma_cm_event_timeout_ms": 0, 00:13:28.436 "dhchap_digests": [ 00:13:28.436 "sha256", 00:13:28.436 "sha384", 00:13:28.436 "sha512" 00:13:28.436 ], 00:13:28.436 "dhchap_dhgroups": [ 00:13:28.436 "null", 00:13:28.436 "ffdhe2048", 00:13:28.436 "ffdhe3072", 00:13:28.436 "ffdhe4096", 00:13:28.436 "ffdhe6144", 00:13:28.436 "ffdhe8192" 00:13:28.436 ] 00:13:28.436 } 00:13:28.436 }, 00:13:28.436 { 00:13:28.436 "method": "bdev_nvme_attach_controller", 00:13:28.436 "params": { 00:13:28.436 "name": "nvme0", 00:13:28.436 "trtype": "TCP", 00:13:28.436 "adrfam": "IPv4", 00:13:28.436 "traddr": "10.0.0.3", 00:13:28.436 "trsvcid": "4420", 00:13:28.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.436 "prchk_reftag": false, 00:13:28.436 "prchk_guard": false, 00:13:28.437 "ctrlr_loss_timeout_sec": 0, 00:13:28.437 "reconnect_delay_sec": 0, 00:13:28.437 "fast_io_fail_timeout_sec": 0, 00:13:28.437 "psk": "key0", 00:13:28.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.437 "hdgst": false, 00:13:28.437 "ddgst": false, 00:13:28.437 "multipath": "multipath" 00:13:28.437 } 00:13:28.437 }, 00:13:28.437 { 00:13:28.437 "method": "bdev_nvme_set_hotplug", 00:13:28.437 "params": { 00:13:28.437 "period_us": 100000, 00:13:28.437 "enable": false 00:13:28.437 } 00:13:28.437 }, 00:13:28.437 { 00:13:28.437 "method": "bdev_enable_histogram", 00:13:28.437 "params": { 00:13:28.437 "name": "nvme0n1", 00:13:28.437 "enable": true 00:13:28.437 } 00:13:28.437 }, 00:13:28.437 { 00:13:28.437 "method": "bdev_wait_for_examine" 00:13:28.437 } 00:13:28.437 ] 00:13:28.437 }, 00:13:28.437 { 
00:13:28.437 "subsystem": "nbd", 00:13:28.437 "config": [] 00:13:28.437 } 00:13:28.437 ] 00:13:28.437 }' 00:13:28.437 [2024-11-17 13:23:17.503204] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:28.437 [2024-11-17 13:23:17.503314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72275 ] 00:13:28.695 [2024-11-17 13:23:17.656465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.695 [2024-11-17 13:23:17.707850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.695 [2024-11-17 13:23:17.840274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.695 [2024-11-17 13:23:17.884038] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.262 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.262 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:29.262 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:29.262 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:13:29.521 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.521 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:29.780 Running I/O for 1 seconds... 
00:13:30.715 4724.00 IOPS, 18.45 MiB/s 00:13:30.715 Latency(us) 00:13:30.715 [2024-11-17T13:23:19.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.715 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.715 Verification LBA range: start 0x0 length 0x2000 00:13:30.715 nvme0n1 : 1.01 4785.16 18.69 0.00 0.00 26536.54 4885.41 23473.80 00:13:30.715 [2024-11-17T13:23:19.939Z] =================================================================================================================== 00:13:30.716 [2024-11-17T13:23:19.940Z] Total : 4785.16 18.69 0.00 0.00 26536.54 4885.41 23473.80 00:13:30.716 { 00:13:30.716 "results": [ 00:13:30.716 { 00:13:30.716 "job": "nvme0n1", 00:13:30.716 "core_mask": "0x2", 00:13:30.716 "workload": "verify", 00:13:30.716 "status": "finished", 00:13:30.716 "verify_range": { 00:13:30.716 "start": 0, 00:13:30.716 "length": 8192 00:13:30.716 }, 00:13:30.716 "queue_depth": 128, 00:13:30.716 "io_size": 4096, 00:13:30.716 "runtime": 1.014177, 00:13:30.716 "iops": 4785.16077568314, 00:13:30.716 "mibps": 18.692034280012265, 00:13:30.716 "io_failed": 0, 00:13:30.716 "io_timeout": 0, 00:13:30.716 "avg_latency_us": 26536.540198939736, 00:13:30.716 "min_latency_us": 4885.410909090909, 00:13:30.716 "max_latency_us": 23473.803636363635 00:13:30.716 } 00:13:30.716 ], 00:13:30.716 "core_count": 1 00:13:30.716 } 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:30.716 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:30.716 nvmf_trace.0 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72275 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72275 ']' 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72275 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72275 00:13:30.974 13:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:30.974 killing process with pid 72275 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72275' 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72275 00:13:30.974 Received shutdown signal, test time was about 1.000000 seconds 00:13:30.974 00:13:30.974 Latency(us) 00:13:30.974 [2024-11-17T13:23:20.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.974 [2024-11-17T13:23:20.198Z] =================================================================================================================== 00:13:30.974 [2024-11-17T13:23:20.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.974 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72275 00:13:30.974 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:30.974 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.974 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.233 rmmod nvme_tcp 00:13:31.233 rmmod nvme_fabrics 00:13:31.233 rmmod nvme_keyring 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72243 ']' 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72243 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72243 ']' 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72243 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72243 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.233 killing process with pid 72243 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72243' 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72243 00:13:31.233 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72243 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:31.492 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EGJvGSpe2d /tmp/tmp.daCugYrxRZ /tmp/tmp.4dOTcdtkO4 00:13:31.751 00:13:31.751 real 1m22.878s 00:13:31.751 user 2m8.595s 00:13:31.751 sys 0m29.665s 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:13:31.751 ************************************ 00:13:31.751 END TEST nvmf_tls 00:13:31.751 ************************************ 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.751 ************************************ 00:13:31.751 START TEST nvmf_fips 00:13:31.751 ************************************ 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:31.751 * Looking for test storage... 00:13:31.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:13:31.751 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.012 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:32.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.012 --rc genhtml_branch_coverage=1 00:13:32.012 --rc genhtml_function_coverage=1 00:13:32.013 --rc genhtml_legend=1 00:13:32.013 --rc geninfo_all_blocks=1 00:13:32.013 --rc geninfo_unexecuted_blocks=1 00:13:32.013 00:13:32.013 ' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:32.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.013 --rc genhtml_branch_coverage=1 00:13:32.013 --rc genhtml_function_coverage=1 00:13:32.013 --rc genhtml_legend=1 00:13:32.013 --rc geninfo_all_blocks=1 00:13:32.013 --rc geninfo_unexecuted_blocks=1 00:13:32.013 00:13:32.013 ' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:32.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.013 --rc genhtml_branch_coverage=1 00:13:32.013 --rc genhtml_function_coverage=1 00:13:32.013 --rc genhtml_legend=1 00:13:32.013 --rc geninfo_all_blocks=1 00:13:32.013 --rc geninfo_unexecuted_blocks=1 00:13:32.013 00:13:32.013 ' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:32.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.013 --rc genhtml_branch_coverage=1 00:13:32.013 --rc genhtml_function_coverage=1 00:13:32.013 --rc genhtml_legend=1 00:13:32.013 --rc geninfo_all_blocks=1 00:13:32.013 --rc geninfo_unexecuted_blocks=1 00:13:32.013 00:13:32.013 ' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:13:32.013 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:13:32.014 Error setting digest 00:13:32.014 4002210C537F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:32.014 4002210C537F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.014 
13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:32.014 Cannot find device "nvmf_init_br" 00:13:32.014 13:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:32.014 Cannot find device "nvmf_init_br2" 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:32.014 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:32.273 Cannot find device "nvmf_tgt_br" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.273 Cannot find device "nvmf_tgt_br2" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:32.273 Cannot find device "nvmf_init_br" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:32.273 Cannot find device "nvmf_init_br2" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:32.273 Cannot find device "nvmf_tgt_br" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:32.273 Cannot find device "nvmf_tgt_br2" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:32.273 Cannot find device "nvmf_br" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:32.273 Cannot find device "nvmf_init_if" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:32.273 Cannot find device "nvmf_init_if2" 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:32.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.273 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.274 13:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.274 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:32.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:32.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:32.533 00:13:32.533 --- 10.0.0.3 ping statistics --- 00:13:32.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.533 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:32.533 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:32.533 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:13:32.533 00:13:32.533 --- 10.0.0.4 ping statistics --- 00:13:32.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.533 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:32.533 00:13:32.533 --- 10.0.0.1 ping statistics --- 00:13:32.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.533 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:32.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:32.533 00:13:32.533 --- 10.0.0.2 ping statistics --- 00:13:32.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.533 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72599 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72599 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72599 ']' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.533 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:32.533 [2024-11-17 13:23:21.744422] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:13:32.533 [2024-11-17 13:23:21.744512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.792 [2024-11-17 13:23:21.892807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.792 [2024-11-17 13:23:21.947213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.792 [2024-11-17 13:23:21.947277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.792 [2024-11-17 13:23:21.947291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.792 [2024-11-17 13:23:21.947302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.792 [2024-11-17 13:23:21.947311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.792 [2024-11-17 13:23:21.947753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.792 [2024-11-17 13:23:22.011040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.8BB 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.8BB 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.8BB 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.8BB 00:13:33.051 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:33.309 [2024-11-17 13:23:22.431086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.309 [2024-11-17 13:23:22.447051] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:33.309 [2024-11-17 13:23:22.447258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:33.309 malloc0 00:13:33.309 13:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72633 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72633 /var/tmp/bdevperf.sock 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72633 ']' 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.309 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:33.568 [2024-11-17 13:23:22.594540] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:33.568 [2024-11-17 13:23:22.594629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72633 ] 00:13:33.568 [2024-11-17 13:23:22.747836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.827 [2024-11-17 13:23:22.807080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.827 [2024-11-17 13:23:22.864177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.394 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.394 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:34.394 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.8BB 00:13:34.652 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:34.911 [2024-11-17 13:23:24.073498] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.169 TLSTESTn1 00:13:35.169 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:35.169 Running I/O for 10 seconds... 
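# For readability: a condensed sketch of the TLS/PSK sequence the fips.sh trace
# above just drove, reconstructed only from commands visible in this log (the
# key file path /tmp/spdk-psk.8BB, the 10.0.0.3:4420 listener and the NQNs are
# the values from this particular run, not general defaults).
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)           # resolved to /tmp/spdk-psk.8BB here
echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
# (The target side -- the TLS listener on 10.0.0.3:4420 with the malloc0
#  namespace -- was configured earlier by setup_nvmf_tgt_conf via rpc.py and is
#  not reproduced in this sketch.)
# bdevperf is started with -z so it idles until driven over its RPC socket:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# Register the PSK with the bdevperf RPC server and attach to the TLS listener
# (fips.sh@151 and @152 above):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 "$key_path"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# Run the 10-second verify workload whose per-second throughput follows:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests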
00:13:37.039 4604.00 IOPS, 17.98 MiB/s [2024-11-17T13:23:27.638Z] 4678.00 IOPS, 18.27 MiB/s [2024-11-17T13:23:28.573Z] 4701.00 IOPS, 18.36 MiB/s [2024-11-17T13:23:29.508Z] 4717.50 IOPS, 18.43 MiB/s [2024-11-17T13:23:30.443Z] 4725.20 IOPS, 18.46 MiB/s [2024-11-17T13:23:31.425Z] 4732.83 IOPS, 18.49 MiB/s [2024-11-17T13:23:32.361Z] 4733.29 IOPS, 18.49 MiB/s [2024-11-17T13:23:33.298Z] 4738.62 IOPS, 18.51 MiB/s [2024-11-17T13:23:34.675Z] 4744.44 IOPS, 18.53 MiB/s [2024-11-17T13:23:34.675Z] 4751.10 IOPS, 18.56 MiB/s 00:13:45.451 Latency(us) 00:13:45.451 [2024-11-17T13:23:34.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.451 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:45.451 Verification LBA range: start 0x0 length 0x2000 00:13:45.451 TLSTESTn1 : 10.02 4756.37 18.58 0.00 0.00 26865.53 5391.83 22520.55 00:13:45.451 [2024-11-17T13:23:34.675Z] =================================================================================================================== 00:13:45.451 [2024-11-17T13:23:34.675Z] Total : 4756.37 18.58 0.00 0.00 26865.53 5391.83 22520.55 00:13:45.452 { 00:13:45.452 "results": [ 00:13:45.452 { 00:13:45.452 "job": "TLSTESTn1", 00:13:45.452 "core_mask": "0x4", 00:13:45.452 "workload": "verify", 00:13:45.452 "status": "finished", 00:13:45.452 "verify_range": { 00:13:45.452 "start": 0, 00:13:45.452 "length": 8192 00:13:45.452 }, 00:13:45.452 "queue_depth": 128, 00:13:45.452 "io_size": 4096, 00:13:45.452 "runtime": 10.015623, 00:13:45.452 "iops": 4756.369124516767, 00:13:45.452 "mibps": 18.579566892643623, 00:13:45.452 "io_failed": 0, 00:13:45.452 "io_timeout": 0, 00:13:45.452 "avg_latency_us": 26865.532510409947, 00:13:45.452 "min_latency_us": 5391.825454545455, 00:13:45.452 "max_latency_us": 22520.552727272727 00:13:45.452 } 00:13:45.452 ], 00:13:45.452 "core_count": 1 00:13:45.452 } 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:45.452 nvmf_trace.0 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72633 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72633 ']' 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72633 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72633 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:45.452 killing process with pid 72633 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72633' 00:13:45.452 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.452 00:13:45.452 Latency(us) 00:13:45.452 [2024-11-17T13:23:34.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.452 [2024-11-17T13:23:34.676Z] =================================================================================================================== 00:13:45.452 [2024-11-17T13:23:34.676Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72633 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72633 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.452 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.710 rmmod nvme_tcp 00:13:45.710 rmmod nvme_fabrics 00:13:45.710 rmmod nvme_keyring 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72599 ']' 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72599 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72599 ']' 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72599 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72599 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72599' 00:13:45.710 killing process with pid 72599 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72599 00:13:45.710 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72599 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:45.969 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:45.969 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:45.970 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:45.970 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.970 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:13:46.228 13:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.8BB 00:13:46.228 00:13:46.228 real 0m14.410s 00:13:46.228 user 0m19.414s 00:13:46.228 sys 0m6.249s 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:46.228 ************************************ 00:13:46.228 END TEST nvmf_fips 00:13:46.228 ************************************ 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.228 ************************************ 00:13:46.228 START TEST nvmf_control_msg_list 00:13:46.228 ************************************ 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:46.228 * Looking for test storage... 00:13:46.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.228 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.487 --rc genhtml_branch_coverage=1 00:13:46.487 --rc genhtml_function_coverage=1 00:13:46.487 --rc genhtml_legend=1 00:13:46.487 --rc geninfo_all_blocks=1 00:13:46.487 --rc geninfo_unexecuted_blocks=1 00:13:46.487 00:13:46.487 ' 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.487 --rc genhtml_branch_coverage=1 00:13:46.487 --rc genhtml_function_coverage=1 00:13:46.487 --rc genhtml_legend=1 00:13:46.487 --rc geninfo_all_blocks=1 00:13:46.487 --rc geninfo_unexecuted_blocks=1 00:13:46.487 00:13:46.487 ' 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.487 --rc genhtml_branch_coverage=1 00:13:46.487 --rc genhtml_function_coverage=1 00:13:46.487 --rc genhtml_legend=1 00:13:46.487 --rc geninfo_all_blocks=1 00:13:46.487 --rc geninfo_unexecuted_blocks=1 00:13:46.487 00:13:46.487 ' 00:13:46.487 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.488 --rc genhtml_branch_coverage=1 00:13:46.488 --rc genhtml_function_coverage=1 00:13:46.488 --rc genhtml_legend=1 00:13:46.488 --rc geninfo_all_blocks=1 00:13:46.488 --rc 
geninfo_unexecuted_blocks=1 00:13:46.488 00:13:46.488 ' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:46.488 Cannot find device "nvmf_init_br" 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:46.488 Cannot find device "nvmf_init_br2" 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:46.488 Cannot find device "nvmf_tgt_br" 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.488 Cannot find device "nvmf_tgt_br2" 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:46.488 Cannot find device "nvmf_init_br" 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:46.488 Cannot find device "nvmf_init_br2" 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:13:46.488 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:46.488 Cannot find device "nvmf_tgt_br" 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:46.489 Cannot find device "nvmf_tgt_br2" 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:46.489 Cannot find device "nvmf_br" 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:46.489 Cannot find 
device "nvmf_init_if" 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:46.489 Cannot find device "nvmf_init_if2" 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.489 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:46.747 13:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.747 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:46.748 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:46.748 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:46.748 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.748 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:46.748 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:46.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:13:46.748 00:13:46.748 --- 10.0.0.3 ping statistics --- 00:13:46.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.748 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:46.748 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:46.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:46.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:13:46.748 00:13:46.748 --- 10.0.0.4 ping statistics --- 00:13:46.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.748 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:47.007 00:13:47.007 --- 10.0.0.1 ping statistics --- 00:13:47.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.007 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:47.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:47.007 00:13:47.007 --- 10.0.0.2 ping statistics --- 00:13:47.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.007 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.007 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73021 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73021 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73021 ']' 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:47.007 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.007 [2024-11-17 13:23:36.067174] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:47.007 [2024-11-17 13:23:36.067255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.007 [2024-11-17 13:23:36.220972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.266 [2024-11-17 13:23:36.282255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.266 [2024-11-17 13:23:36.282325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.266 [2024-11-17 13:23:36.282340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.266 [2024-11-17 13:23:36.282351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.266 [2024-11-17 13:23:36.282360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.266 [2024-11-17 13:23:36.282852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.266 [2024-11-17 13:23:36.365807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.266 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.266 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:13:47.266 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.266 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.266 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 [2024-11-17 13:23:36.494468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.525 13:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 Malloc0 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 [2024-11-17 13:23:36.539434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73046 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73047 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73048 00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 
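To summarize what the control_msg_list test has just wired up through rpc_cmd (the test helper that drives the target's RPC socket, /var/tmp/spdk.sock): a TCP transport with in-capsule data capped at 768 bytes and the control-message pool cut down to a single entry, one subsystem backed by a 32 MB malloc bdev, and three single-queue-depth perf clients pinned to different cores and run in parallel against it. A rough equivalent using the SPDK rpc.py script, reconstructed from the flags in the trace (not the literal test code; the "-t tcp -o" portion is simply the NVMF_TRANSPORT_OPTS string set earlier in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512      # 32 MB malloc bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # three concurrent initiators, one per core mask, queue depth 1, 4 KiB random reads for 1 s
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    for mask in 0x2 0x4 0x8; do
        "$perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait

The deliberately small control-message budget (--control-msg-num 1) is what the three concurrent connections exercise; the latency tables that follow (roughly 4,000 IOPS and ~248 us average per client at queue depth 1) show the I/O still completes.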
00:13:47.525 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73046 00:13:47.525 [2024-11-17 13:23:36.737992] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:47.525 [2024-11-17 13:23:36.738179] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:47.525 [2024-11-17 13:23:36.738340] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:48.908 Initializing NVMe Controllers 00:13:48.908 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:48.908 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:13:48.908 Initialization complete. Launching workers. 00:13:48.908 ======================================================== 00:13:48.908 Latency(us) 00:13:48.908 Device Information : IOPS MiB/s Average min max 00:13:48.908 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4021.00 15.71 248.36 165.89 452.06 00:13:48.908 ======================================================== 00:13:48.908 Total : 4021.00 15.71 248.36 165.89 452.06 00:13:48.908 00:13:48.908 Initializing NVMe Controllers 00:13:48.908 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:48.908 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:13:48.908 Initialization complete. Launching workers. 00:13:48.908 ======================================================== 00:13:48.908 Latency(us) 00:13:48.908 Device Information : IOPS MiB/s Average min max 00:13:48.908 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4023.99 15.72 248.20 198.43 540.07 00:13:48.908 ======================================================== 00:13:48.908 Total : 4023.99 15.72 248.20 198.43 540.07 00:13:48.908 00:13:48.908 Initializing NVMe Controllers 00:13:48.908 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:48.908 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:13:48.908 Initialization complete. Launching workers. 
00:13:48.908 ======================================================== 00:13:48.908 Latency(us) 00:13:48.908 Device Information : IOPS MiB/s Average min max 00:13:48.908 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4025.88 15.73 248.05 190.16 389.17 00:13:48.908 ======================================================== 00:13:48.908 Total : 4025.88 15.73 248.05 190.16 389.17 00:13:48.908 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73047 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73048 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.908 rmmod nvme_tcp 00:13:48.908 rmmod nvme_fabrics 00:13:48.908 rmmod nvme_keyring 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73021 ']' 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73021 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73021 ']' 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73021 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73021 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.908 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.908 killing process with pid 73021 00:13:48.909 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73021' 00:13:48.909 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73021 00:13:48.909 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73021 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.168 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:13:49.428 00:13:49.428 real 0m3.142s 00:13:49.428 user 0m4.871s 00:13:49.428 
sys 0m1.413s 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.428 ************************************ 00:13:49.428 END TEST nvmf_control_msg_list 00:13:49.428 ************************************ 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.428 ************************************ 00:13:49.428 START TEST nvmf_wait_for_buf 00:13:49.428 ************************************ 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:49.428 * Looking for test storage... 00:13:49.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.428 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.688 --rc genhtml_branch_coverage=1 00:13:49.688 --rc genhtml_function_coverage=1 00:13:49.688 --rc genhtml_legend=1 00:13:49.688 --rc geninfo_all_blocks=1 00:13:49.688 --rc geninfo_unexecuted_blocks=1 00:13:49.688 00:13:49.688 ' 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.688 --rc genhtml_branch_coverage=1 00:13:49.688 --rc genhtml_function_coverage=1 00:13:49.688 --rc genhtml_legend=1 00:13:49.688 --rc geninfo_all_blocks=1 00:13:49.688 --rc geninfo_unexecuted_blocks=1 00:13:49.688 00:13:49.688 ' 00:13:49.688 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.688 --rc genhtml_branch_coverage=1 00:13:49.688 --rc genhtml_function_coverage=1 00:13:49.689 --rc genhtml_legend=1 00:13:49.689 --rc geninfo_all_blocks=1 00:13:49.689 --rc geninfo_unexecuted_blocks=1 00:13:49.689 00:13:49.689 ' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.689 --rc genhtml_branch_coverage=1 00:13:49.689 --rc genhtml_function_coverage=1 00:13:49.689 --rc genhtml_legend=1 00:13:49.689 --rc geninfo_all_blocks=1 00:13:49.689 --rc geninfo_unexecuted_blocks=1 00:13:49.689 00:13:49.689 ' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.689 13:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.689 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:49.689 Cannot find device "nvmf_init_br" 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:49.689 Cannot find device "nvmf_init_br2" 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:49.689 Cannot find device "nvmf_tgt_br" 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.689 Cannot find device "nvmf_tgt_br2" 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:49.689 Cannot find device "nvmf_init_br" 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:13:49.689 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:49.689 Cannot find device "nvmf_init_br2" 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:49.690 Cannot find device "nvmf_tgt_br" 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:49.690 Cannot find device "nvmf_tgt_br2" 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:49.690 Cannot find device "nvmf_br" 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:49.690 Cannot find device "nvmf_init_if" 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:49.690 Cannot find device "nvmf_init_if2" 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.690 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.690 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:49.949 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:49.949 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:49.949 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.949 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.949 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:49.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:13:49.950 00:13:49.950 --- 10.0.0.3 ping statistics --- 00:13:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.950 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:49.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:49.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:13:49.950 00:13:49.950 --- 10.0.0.4 ping statistics --- 00:13:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.950 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:49.950 00:13:49.950 --- 10.0.0.1 ping statistics --- 00:13:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.950 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:49.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:49.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:49.950 00:13:49.950 --- 10.0.0.2 ping statistics --- 00:13:49.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.950 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73291 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73291 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73291 ']' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.950 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.209 [2024-11-17 13:23:39.223029] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
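This second target is started with --wait-for-rpc, which is the hinge of the wait_for_buf test: the application comes up but defers framework initialization until an explicit RPC, and the test uses that window to shrink the iobuf small-buffer pool before the transport can allocate from it. Condensed from the rpc_cmd calls that follow in the trace (a reconstruction, not the test script; I read the -u/-n/-b short options as in-capsule-data-size, num-shared-buffers and buf-cache-size, but the trace itself only records the letters):

    # 1. launch the target inside the namespace, deferring framework init
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # (the real test waits for /var/tmp/spdk.sock to appear before issuing RPCs)

    # 2. tune the pools while the framework is still waiting, then release it
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init

    # 3. same subsystem plumbing as before, but with a deliberately small buffer budget
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The 128 KiB (-o 131072) random-read perf run at queue depth 4 that follows is what then pushes the target against that reduced buffer pool, which is the "wait for buf" behavior under test.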
00:13:50.209 [2024-11-17 13:23:39.223118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.209 [2024-11-17 13:23:39.376933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.468 [2024-11-17 13:23:39.430407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.468 [2024-11-17 13:23:39.430476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.468 [2024-11-17 13:23:39.430491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.468 [2024-11-17 13:23:39.430502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.468 [2024-11-17 13:23:39.430511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.468 [2024-11-17 13:23:39.430975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.468 13:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 [2024-11-17 13:23:39.583612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 Malloc0 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 [2024-11-17 13:23:39.654762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.468 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:50.469 [2024-11-17 13:23:39.682895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.469 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:50.727 [2024-11-17 13:23:39.878891] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:52.106 Initializing NVMe Controllers 00:13:52.106 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:52.106 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:13:52.106 Initialization complete. Launching workers. 00:13:52.106 ======================================================== 00:13:52.106 Latency(us) 00:13:52.106 Device Information : IOPS MiB/s Average min max 00:13:52.106 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7975.53 7019.75 8341.33 00:13:52.106 ======================================================== 00:13:52.106 Total : 504.00 63.00 7975.53 7019.75 8341.33 00:13:52.106 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.106 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.106 rmmod nvme_tcp 00:13:52.106 rmmod nvme_fabrics 00:13:52.106 rmmod nvme_keyring 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73291 ']' 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73291 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73291 ']' 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73291 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73291 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.365 killing process with pid 73291 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73291' 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73291 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73291 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:52.365 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:52.624 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:13:52.625 00:13:52.625 real 0m3.280s 00:13:52.625 user 0m2.616s 00:13:52.625 sys 0m0.825s 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:52.625 ************************************ 00:13:52.625 END TEST nvmf_wait_for_buf 00:13:52.625 ************************************ 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.625 ************************************ 00:13:52.625 START TEST nvmf_nsid 00:13:52.625 ************************************ 00:13:52.625 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:52.885 * Looking for test storage... 
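Note: before the nsid output continues, this is how the wait_for_buf verdict above was reached. The test shrank the small iobuf pool to 154 buffers, ran spdk_nvme_perf against the target, and then required the transport's small-pool retry counter to be nonzero (4788 in this run), proving that I/O still completed even though buffer allocations had to wait. A sketch of that final check, assuming the stock rpc.py client and jq:

    retry_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "no buffer waits were observed, so the test would fail here"
        exit 1
    fi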
00:13:52.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.885 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.885 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.885 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.885 --rc genhtml_branch_coverage=1 00:13:52.885 --rc genhtml_function_coverage=1 00:13:52.885 --rc genhtml_legend=1 00:13:52.885 --rc geninfo_all_blocks=1 00:13:52.885 --rc geninfo_unexecuted_blocks=1 00:13:52.885 00:13:52.885 ' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.885 --rc genhtml_branch_coverage=1 00:13:52.885 --rc genhtml_function_coverage=1 00:13:52.885 --rc genhtml_legend=1 00:13:52.885 --rc geninfo_all_blocks=1 00:13:52.885 --rc geninfo_unexecuted_blocks=1 00:13:52.885 00:13:52.885 ' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.885 --rc genhtml_branch_coverage=1 00:13:52.885 --rc genhtml_function_coverage=1 00:13:52.885 --rc genhtml_legend=1 00:13:52.885 --rc geninfo_all_blocks=1 00:13:52.885 --rc geninfo_unexecuted_blocks=1 00:13:52.885 00:13:52.885 ' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.885 --rc genhtml_branch_coverage=1 00:13:52.885 --rc genhtml_function_coverage=1 00:13:52.885 --rc genhtml_legend=1 00:13:52.885 --rc geninfo_all_blocks=1 00:13:52.885 --rc geninfo_unexecuted_blocks=1 00:13:52.885 00:13:52.885 ' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
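Note: nvmf/common.sh is being sourced here and generates a per-run host identity that reappears later on the nvme connect line. A sketch of that derivation; the exact parameter expansion is an assumption for illustration, not a quote of common.sh:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # one way to peel off the bare UUID
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"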
00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.885 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.886 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:52.886 Cannot find device "nvmf_init_br" 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:52.886 Cannot find device "nvmf_init_br2" 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:52.886 Cannot find device "nvmf_tgt_br" 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:13:52.886 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:53.145 Cannot find device "nvmf_tgt_br2" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:53.145 Cannot find device "nvmf_init_br" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:53.145 Cannot find device "nvmf_init_br2" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:53.145 Cannot find device "nvmf_tgt_br" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:53.145 Cannot find device "nvmf_tgt_br2" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:53.145 Cannot find device "nvmf_br" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:53.145 Cannot find device "nvmf_init_if" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:53.145 Cannot find device "nvmf_init_if2" 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:53.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:13:53.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:53.145 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
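Note: at this point nvmf_veth_init has rebuilt the test network for the nsid run. Topology distilled from the commands above; the *_br veth ends stay on the host as bridge members, while the *_if ends hold the addresses:

    #   nvmf_init_if  (host,  10.0.0.1/24) <-veth-> nvmf_init_br  --+
    #   nvmf_init_if2 (host,  10.0.0.2/24) <-veth-> nvmf_init_br2 --+-- nvmf_br (bridge, host)
    #   nvmf_tgt_if   (netns, 10.0.0.3/24) <-veth-> nvmf_tgt_br   --+
    #   nvmf_tgt_if2  (netns, 10.0.0.4/24) <-veth-> nvmf_tgt_br2  --+
    # The iptables rules and pings that follow open TCP/4420 on the initiator
    # interfaces and confirm initiator <-> target reachability in both directions.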
00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:53.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:53.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:13:53.405 00:13:53.405 --- 10.0.0.3 ping statistics --- 00:13:53.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.405 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:53.405 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:53.405 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:13:53.405 00:13:53.405 --- 10.0.0.4 ping statistics --- 00:13:53.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.405 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:53.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:53.405 00:13:53.405 --- 10.0.0.1 ping statistics --- 00:13:53.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.405 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:53.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:53.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:13:53.405 00:13:53.405 --- 10.0.0.2 ping statistics --- 00:13:53.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.405 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73548 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73548 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73548 ']' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.405 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:53.405 [2024-11-17 13:23:42.537638] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
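Note: the nsid test runs two SPDK instances side by side; the -m core masks keep their reactors on separate cores, and the second instance gets a private RPC socket. Sketched from the launch lines in this trace (the "Reactor started on core N" notices confirm the placement):

    # first target: core mask 0x1 (core 0), default RPC socket /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    # second target: core mask 0x2 (core 1), addressed via rpc.py -s /var/tmp/tgt2.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &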
00:13:53.405 [2024-11-17 13:23:42.537728] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.666 [2024-11-17 13:23:42.691732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.666 [2024-11-17 13:23:42.753364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.666 [2024-11-17 13:23:42.753440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.666 [2024-11-17 13:23:42.753454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.666 [2024-11-17 13:23:42.753465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.666 [2024-11-17 13:23:42.753474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.666 [2024-11-17 13:23:42.753975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.666 [2024-11-17 13:23:42.836885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73567 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5ddd2df0-49f7-43ec-94b0-33119477d9fa 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=90065155-7cd0-40b5-bcf1-12783bdcb04d 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=00caa89a-6e79-4c13-8afa-216ac3a8a18f 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.925 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:53.925 null0 00:13:53.925 null1 00:13:53.925 null2 00:13:53.925 [2024-11-17 13:23:43.022469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.925 [2024-11-17 13:23:43.037471] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:53.925 [2024-11-17 13:23:43.037563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73567 ] 00:13:53.925 [2024-11-17 13:23:43.046611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73567 /var/tmp/tgt2.sock 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73567 ']' 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
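Note: the three uuidgen values above become namespace UUIDs on the second target, and the checks further down confirm that each namespace's NGUID, as seen from the initiator, is simply the dashless, uppercased form of the same UUID. A sketch of one such check, assuming nvme-cli and jq as used elsewhere in this trace:

    ns1uuid=5ddd2df0-49f7-43ec-94b0-33119477d9fa    # value from this run
    expected=$(tr -d '-' <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $actual == "$expected" ]] || echo "NGUID mismatch on nvme0n1"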
00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.925 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:54.184 [2024-11-17 13:23:43.192559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.184 [2024-11-17 13:23:43.253232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.184 [2024-11-17 13:23:43.328735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.443 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.443 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:54.443 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:13:55.011 [2024-11-17 13:23:43.967944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.011 [2024-11-17 13:23:43.984028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:13:55.011 nvme0n1 nvme0n2 00:13:55.011 nvme1n1 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:13:55.011 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:56.390 13:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5ddd2df0-49f7-43ec-94b0-33119477d9fa 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5ddd2df049f743ec94b033119477d9fa 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5DDD2DF049F743EC94B033119477D9FA 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5DDD2DF049F743EC94B033119477D9FA == \5\D\D\D\2\D\F\0\4\9\F\7\4\3\E\C\9\4\B\0\3\3\1\1\9\4\7\7\D\9\F\A ]] 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 90065155-7cd0-40b5-bcf1-12783bdcb04d 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=900651557cd040b5bcf112783bdcb04d 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 900651557CD040B5BCF112783BDCB04D 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 900651557CD040B5BCF112783BDCB04D == \9\0\0\6\5\1\5\5\7\C\D\0\4\0\B\5\B\C\F\1\1\2\7\8\3\B\D\C\B\0\4\D ]] 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:56.390 13:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 00caa89a-6e79-4c13-8afa-216ac3a8a18f 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=00caa89a6e794c138afa216ac3a8a18f 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 00CAA89A6E794C138AFA216AC3A8A18F 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 00CAA89A6E794C138AFA216AC3A8A18F == \0\0\C\A\A\8\9\A\6\E\7\9\4\C\1\3\8\A\F\A\2\1\6\A\C\3\A\8\A\1\8\F ]] 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73567 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73567 ']' 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73567 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.390 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73567 00:13:56.650 killing process with pid 73567 00:13:56.650 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:56.650 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.650 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73567' 00:13:56.650 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73567 00:13:56.650 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73567 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.909 rmmod nvme_tcp 00:13:56.909 rmmod nvme_fabrics 00:13:56.909 rmmod nvme_keyring 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73548 ']' 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73548 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73548 ']' 00:13:56.909 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73548 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73548 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.169 killing process with pid 73548 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73548' 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73548 00:13:57.169 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73548 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:57.428 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.429 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.429 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.429 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:13:57.429 00:13:57.429 real 0m4.794s 00:13:57.429 user 0m7.072s 00:13:57.429 sys 0m1.750s 00:13:57.429 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.429 ************************************ 00:13:57.429 END TEST nvmf_nsid 00:13:57.429 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:57.429 ************************************ 00:13:57.688 13:23:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:57.688 ************************************ 00:13:57.688 END TEST nvmf_target_extra 00:13:57.688 ************************************ 00:13:57.688 00:13:57.688 real 4m45.429s 00:13:57.688 user 9m48.065s 00:13:57.688 sys 1m8.214s 00:13:57.688 13:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.688 13:23:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.688 13:23:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:57.688 13:23:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.688 13:23:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.688 13:23:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.688 ************************************ 00:13:57.688 START TEST nvmf_host 00:13:57.688 ************************************ 00:13:57.688 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:57.688 * Looking for test storage... 
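For reference, the NGUID assertions in the nvmf_nsid run that just finished above (target/nsid.sh@96-@100) reduce to a simple round trip: strip the dashes from the namespace UUID, upper-case it, and compare it against the nguid field that nvme id-ns reports for the connected block device. A minimal stand-alone sketch of that check, assuming nvme-cli and jq are installed, root privileges, and that nvme0n1 is the namespace connected earlier in the trace (the sample UUID is the one from the log):

    uuid=5ddd2df0-49f7-43ec-94b0-33119477d9fa                      # UUID assigned to nsid 1 in the trace above
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')   # uuid2nguid: drop dashes, upper-case
    reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ "$reported" == "$expected" ]] && echo "nsid 1 NGUID matches its UUID"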
00:13:57.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:57.688 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:57.688 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:57.688 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.948 --rc genhtml_branch_coverage=1 00:13:57.948 --rc genhtml_function_coverage=1 00:13:57.948 --rc genhtml_legend=1 00:13:57.948 --rc geninfo_all_blocks=1 00:13:57.948 --rc geninfo_unexecuted_blocks=1 00:13:57.948 00:13:57.948 ' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.948 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:57.948 --rc genhtml_branch_coverage=1 00:13:57.948 --rc genhtml_function_coverage=1 00:13:57.948 --rc genhtml_legend=1 00:13:57.948 --rc geninfo_all_blocks=1 00:13:57.948 --rc geninfo_unexecuted_blocks=1 00:13:57.948 00:13:57.948 ' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.948 --rc genhtml_branch_coverage=1 00:13:57.948 --rc genhtml_function_coverage=1 00:13:57.948 --rc genhtml_legend=1 00:13:57.948 --rc geninfo_all_blocks=1 00:13:57.948 --rc geninfo_unexecuted_blocks=1 00:13:57.948 00:13:57.948 ' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.948 --rc genhtml_branch_coverage=1 00:13:57.948 --rc genhtml_function_coverage=1 00:13:57.948 --rc genhtml_legend=1 00:13:57.948 --rc geninfo_all_blocks=1 00:13:57.948 --rc geninfo_unexecuted_blocks=1 00:13:57.948 00:13:57.948 ' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.948 13:23:46 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.949 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:57.949 
13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:57.949 ************************************ 00:13:57.949 START TEST nvmf_identify 00:13:57.949 ************************************ 00:13:57.949 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:57.949 * Looking for test storage... 00:13:57.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.949 --rc genhtml_branch_coverage=1 00:13:57.949 --rc genhtml_function_coverage=1 00:13:57.949 --rc genhtml_legend=1 00:13:57.949 --rc geninfo_all_blocks=1 00:13:57.949 --rc geninfo_unexecuted_blocks=1 00:13:57.949 00:13:57.949 ' 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:57.949 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.209 
13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.209 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:58.209 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.210 13:23:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:58.210 Cannot find device "nvmf_init_br" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:58.210 Cannot find device "nvmf_init_br2" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:58.210 Cannot find device "nvmf_tgt_br" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:13:58.210 Cannot find device "nvmf_tgt_br2" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:58.210 Cannot find device "nvmf_init_br" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:58.210 Cannot find device "nvmf_init_br2" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:58.210 Cannot find device "nvmf_tgt_br" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:58.210 Cannot find device "nvmf_tgt_br2" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:58.210 Cannot find device "nvmf_br" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:58.210 Cannot find device "nvmf_init_if" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:58.210 Cannot find device "nvmf_init_if2" 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:58.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:58.210 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:58.470 
13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:58.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:58.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:13:58.470 00:13:58.470 --- 10.0.0.3 ping statistics --- 00:13:58.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.470 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:58.470 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:58.470 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:13:58.470 00:13:58.470 --- 10.0.0.4 ping statistics --- 00:13:58.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.470 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:58.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:58.470 00:13:58.470 --- 10.0.0.1 ping statistics --- 00:13:58.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.470 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:58.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:58.470 00:13:58.470 --- 10.0.0.2 ping statistics --- 00:13:58.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.470 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73923 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73923 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:13:58.470 
13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.470 13:23:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 [2024-11-17 13:23:47.710926] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:58.730 [2024-11-17 13:23:47.711019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.730 [2024-11-17 13:23:47.865596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.730 [2024-11-17 13:23:47.928513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.730 [2024-11-17 13:23:47.928597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.730 [2024-11-17 13:23:47.928612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.730 [2024-11-17 13:23:47.928623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.730 [2024-11-17 13:23:47.928633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
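At this point nvmftestinit has rebuilt the virtual test network and launched the target inside it. Condensed to one initiator/target pair (the trace above builds two of each, adding 10.0.0.2 and 10.0.0.4 the same way), the topology created by the ip commands a few lines earlier looks roughly like this sketch; interface, namespace and address names are taken from the log, and the commands assume root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # host-side initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target pair, one end moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3        # host-side initiator reaching the in-namespace target address

Tagging each firewall rule with an SPDK_NVMF: comment is what lets nvmftestfini remove them wholesale later (the iptables-save | grep -v SPDK_NVMF | iptables-restore step seen during the nsid cleanup above). The target itself is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per host/identify.sh@18), and waitforlisten polls the RPC socket /var/tmp/spdk.sock until it answers.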
00:13:58.730 [2024-11-17 13:23:47.930238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.730 [2024-11-17 13:23:47.930399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.730 [2024-11-17 13:23:47.930530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.730 [2024-11-17 13:23:47.930533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.990 [2024-11-17 13:23:48.015066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:58.990 [2024-11-17 13:23:48.105694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:58.990 Malloc0 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.990 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:59.252 [2024-11-17 13:23:48.228694] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.252 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:59.252 [ 00:13:59.252 { 00:13:59.253 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:59.253 "subtype": "Discovery", 00:13:59.253 "listen_addresses": [ 00:13:59.253 { 00:13:59.253 "trtype": "TCP", 00:13:59.253 "adrfam": "IPv4", 00:13:59.253 "traddr": "10.0.0.3", 00:13:59.253 "trsvcid": "4420" 00:13:59.253 } 00:13:59.253 ], 00:13:59.253 "allow_any_host": true, 00:13:59.253 "hosts": [] 00:13:59.253 }, 00:13:59.253 { 00:13:59.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.253 "subtype": "NVMe", 00:13:59.253 "listen_addresses": [ 00:13:59.253 { 00:13:59.253 "trtype": "TCP", 00:13:59.253 "adrfam": "IPv4", 00:13:59.253 "traddr": "10.0.0.3", 00:13:59.253 "trsvcid": "4420" 00:13:59.253 } 00:13:59.253 ], 00:13:59.253 "allow_any_host": true, 00:13:59.253 "hosts": [], 00:13:59.253 "serial_number": "SPDK00000000000001", 00:13:59.253 "model_number": "SPDK bdev Controller", 00:13:59.253 "max_namespaces": 32, 00:13:59.253 "min_cntlid": 1, 00:13:59.253 "max_cntlid": 65519, 00:13:59.253 "namespaces": [ 00:13:59.253 { 00:13:59.253 "nsid": 1, 00:13:59.253 "bdev_name": "Malloc0", 00:13:59.253 "name": "Malloc0", 00:13:59.253 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:59.253 "eui64": "ABCDEF0123456789", 00:13:59.253 "uuid": "53ce54e5-43a5-420a-8a17-fbbdd6ffbfe1" 00:13:59.253 } 00:13:59.253 ] 00:13:59.253 } 00:13:59.253 ] 00:13:59.253 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.253 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:59.253 [2024-11-17 13:23:48.282645] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
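The subsystem described by the nvmf_get_subsystems JSON dump just above was assembled by the rpc_cmd calls traced at host/identify.sh@24-@35. Issued by hand against the same running target, the sequence would look roughly like the sketch below; paths are relative to the SPDK repo, all argument values are copied from the trace, and rpc.py is assumed to reach the default /var/tmp/spdk.sock socket (unix sockets are reachable from the host even though the target runs inside the netns):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # query the discovery subsystem the way the test does next:
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all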
00:13:59.253 [2024-11-17 13:23:48.282714] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73950 ] 00:13:59.253 [2024-11-17 13:23:48.433563] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:13:59.253 [2024-11-17 13:23:48.433633] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:59.253 [2024-11-17 13:23:48.433639] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:59.253 [2024-11-17 13:23:48.433649] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:59.253 [2024-11-17 13:23:48.433660] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:59.253 [2024-11-17 13:23:48.434008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:13:59.253 [2024-11-17 13:23:48.434079] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x63b750 0 00:13:59.253 [2024-11-17 13:23:48.439782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:59.253 [2024-11-17 13:23:48.439804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:59.253 [2024-11-17 13:23:48.439819] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:59.253 [2024-11-17 13:23:48.439822] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:59.253 [2024-11-17 13:23:48.439854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.439861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.439865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.253 [2024-11-17 13:23:48.439880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:59.253 [2024-11-17 13:23:48.439911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.253 [2024-11-17 13:23:48.447811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.253 [2024-11-17 13:23:48.447828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.253 [2024-11-17 13:23:48.447844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.447849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.253 [2024-11-17 13:23:48.447863] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:59.253 [2024-11-17 13:23:48.447871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:13:59.253 [2024-11-17 13:23:48.447877] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:13:59.253 [2024-11-17 13:23:48.447892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.447897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:59.253 [2024-11-17 13:23:48.447901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.253 [2024-11-17 13:23:48.447909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.253 [2024-11-17 13:23:48.447935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.253 [2024-11-17 13:23:48.448010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.253 [2024-11-17 13:23:48.448017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.253 [2024-11-17 13:23:48.448020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.253 [2024-11-17 13:23:48.448029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:13:59.253 [2024-11-17 13:23:48.448035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:13:59.253 [2024-11-17 13:23:48.448043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.253 [2024-11-17 13:23:48.448056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.253 [2024-11-17 13:23:48.448073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.253 [2024-11-17 13:23:48.448177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.253 [2024-11-17 13:23:48.448184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.253 [2024-11-17 13:23:48.448187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.253 [2024-11-17 13:23:48.448197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:13:59.253 [2024-11-17 13:23:48.448204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:59.253 [2024-11-17 13:23:48.448211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.253 [2024-11-17 13:23:48.448225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.253 [2024-11-17 13:23:48.448243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.253 [2024-11-17 13:23:48.448307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.253 [2024-11-17 13:23:48.448313] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.253 [2024-11-17 13:23:48.448316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.253 [2024-11-17 13:23:48.448325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:59.253 [2024-11-17 13:23:48.448335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.253 [2024-11-17 13:23:48.448349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.253 [2024-11-17 13:23:48.448365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.253 [2024-11-17 13:23:48.448424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.253 [2024-11-17 13:23:48.448430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.253 [2024-11-17 13:23:48.448434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.253 [2024-11-17 13:23:48.448442] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:59.253 [2024-11-17 13:23:48.448447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:59.253 [2024-11-17 13:23:48.448455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:59.253 [2024-11-17 13:23:48.448573] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:13:59.253 [2024-11-17 13:23:48.448579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:59.253 [2024-11-17 13:23:48.448589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.253 [2024-11-17 13:23:48.448596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.253 [2024-11-17 13:23:48.448602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.253 [2024-11-17 13:23:48.448619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.253 [2024-11-17 13:23:48.448671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.253 [2024-11-17 13:23:48.448677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.254 [2024-11-17 13:23:48.448680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:13:59.254 [2024-11-17 13:23:48.448684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.254 [2024-11-17 13:23:48.448688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:59.254 [2024-11-17 13:23:48.448697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.448701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.448704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.448711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.254 [2024-11-17 13:23:48.448727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.254 [2024-11-17 13:23:48.448818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.254 [2024-11-17 13:23:48.448826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.254 [2024-11-17 13:23:48.448829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.448833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.254 [2024-11-17 13:23:48.448838] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:59.254 [2024-11-17 13:23:48.448843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:59.254 [2024-11-17 13:23:48.448850] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:13:59.254 [2024-11-17 13:23:48.448866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:59.254 [2024-11-17 13:23:48.448877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.448881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.448889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.254 [2024-11-17 13:23:48.448908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.254 [2024-11-17 13:23:48.448994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.254 [2024-11-17 13:23:48.449000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.254 [2024-11-17 13:23:48.449004] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63b750): datao=0, datal=4096, cccid=0 00:13:59.254 [2024-11-17 13:23:48.449012] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x69f740) on tqpair(0x63b750): expected_datao=0, payload_size=4096 00:13:59.254 [2024-11-17 13:23:48.449017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:13:59.254 [2024-11-17 13:23:48.449025] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449029] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.254 [2024-11-17 13:23:48.449044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.254 [2024-11-17 13:23:48.449047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.254 [2024-11-17 13:23:48.449059] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:13:59.254 [2024-11-17 13:23:48.449065] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:13:59.254 [2024-11-17 13:23:48.449069] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:13:59.254 [2024-11-17 13:23:48.449075] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:13:59.254 [2024-11-17 13:23:48.449079] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:13:59.254 [2024-11-17 13:23:48.449084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:13:59.254 [2024-11-17 13:23:48.449097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:59.254 [2024-11-17 13:23:48.449105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:59.254 [2024-11-17 13:23:48.449139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.254 [2024-11-17 13:23:48.449198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.254 [2024-11-17 13:23:48.449204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.254 [2024-11-17 13:23:48.449208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.254 [2024-11-17 13:23:48.449219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.254 [2024-11-17 13:23:48.449240] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.254 [2024-11-17 13:23:48.449258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.254 [2024-11-17 13:23:48.449275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.254 [2024-11-17 13:23:48.449291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:59.254 [2024-11-17 13:23:48.449303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:59.254 [2024-11-17 13:23:48.449311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.254 [2024-11-17 13:23:48.449340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f740, cid 0, qid 0 00:13:59.254 [2024-11-17 13:23:48.449346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69f8c0, cid 1, qid 0 00:13:59.254 [2024-11-17 13:23:48.449350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fa40, cid 2, qid 0 00:13:59.254 [2024-11-17 13:23:48.449355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.254 [2024-11-17 13:23:48.449359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fd40, cid 4, qid 0 00:13:59.254 [2024-11-17 13:23:48.449462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.254 [2024-11-17 13:23:48.449468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.254 [2024-11-17 13:23:48.449471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fd40) on tqpair=0x63b750 00:13:59.254 [2024-11-17 13:23:48.449481] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:13:59.254 [2024-11-17 13:23:48.449486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:13:59.254 [2024-11-17 13:23:48.449497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63b750) 00:13:59.254 [2024-11-17 13:23:48.449508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.254 [2024-11-17 13:23:48.449525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fd40, cid 4, qid 0 00:13:59.254 [2024-11-17 13:23:48.449592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.254 [2024-11-17 13:23:48.449598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.254 [2024-11-17 13:23:48.449601] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449605] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63b750): datao=0, datal=4096, cccid=4 00:13:59.254 [2024-11-17 13:23:48.449609] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x69fd40) on tqpair(0x63b750): expected_datao=0, payload_size=4096 00:13:59.254 [2024-11-17 13:23:48.449613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449620] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449623] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.254 [2024-11-17 13:23:48.449636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.254 [2024-11-17 13:23:48.449639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.254 [2024-11-17 13:23:48.449643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fd40) on tqpair=0x63b750 00:13:59.254 [2024-11-17 13:23:48.449656] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:13:59.254 [2024-11-17 13:23:48.449689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63b750) 00:13:59.255 [2024-11-17 13:23:48.449701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.255 [2024-11-17 13:23:48.449708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x63b750) 00:13:59.255 [2024-11-17 13:23:48.449721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.255 [2024-11-17 13:23:48.449745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x69fd40, cid 4, qid 0 00:13:59.255 [2024-11-17 13:23:48.449752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fec0, cid 5, qid 0 00:13:59.255 [2024-11-17 13:23:48.449876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.255 [2024-11-17 13:23:48.449891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.255 [2024-11-17 13:23:48.449895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449898] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63b750): datao=0, datal=1024, cccid=4 00:13:59.255 [2024-11-17 13:23:48.449902] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x69fd40) on tqpair(0x63b750): expected_datao=0, payload_size=1024 00:13:59.255 [2024-11-17 13:23:48.449907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.255 [2024-11-17 13:23:48.449927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.255 [2024-11-17 13:23:48.449931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fec0) on tqpair=0x63b750 00:13:59.255 [2024-11-17 13:23:48.449953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.255 [2024-11-17 13:23:48.449975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.255 [2024-11-17 13:23:48.449978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.449982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fd40) on tqpair=0x63b750 00:13:59.255 [2024-11-17 13:23:48.450001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63b750) 00:13:59.255 [2024-11-17 13:23:48.450012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.255 [2024-11-17 13:23:48.450035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fd40, cid 4, qid 0 00:13:59.255 [2024-11-17 13:23:48.450106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.255 [2024-11-17 13:23:48.450112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.255 [2024-11-17 13:23:48.450115] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450118] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63b750): datao=0, datal=3072, cccid=4 00:13:59.255 [2024-11-17 13:23:48.450123] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x69fd40) on tqpair(0x63b750): expected_datao=0, payload_size=3072 00:13:59.255 [2024-11-17 13:23:48.450127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450133] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450136] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.255 [2024-11-17 13:23:48.450148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.255 [2024-11-17 13:23:48.450151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fd40) on tqpair=0x63b750 00:13:59.255 [2024-11-17 13:23:48.450164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x63b750) 00:13:59.255 [2024-11-17 13:23:48.450174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.255 [2024-11-17 13:23:48.450195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fd40, cid 4, qid 0 00:13:59.255 [2024-11-17 13:23:48.450262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.255 [2024-11-17 13:23:48.450267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.255 [2024-11-17 13:23:48.450271] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450274] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x63b750): datao=0, datal=8, cccid=4 00:13:59.255 [2024-11-17 13:23:48.450278] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x69fd40) on tqpair(0x63b750): expected_datao=0, payload_size=8 00:13:59.255 [2024-11-17 13:23:48.450282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450287] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.255 [2024-11-17 13:23:48.450311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.255 [2024-11-17 13:23:48.450314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.255 [2024-11-17 13:23:48.450318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fd40) on tqpair=0x63b750 00:13:59.255 ===================================================== 00:13:59.255 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:59.255 ===================================================== 00:13:59.255 Controller Capabilities/Features 00:13:59.255 ================================ 00:13:59.255 Vendor ID: 0000 00:13:59.255 Subsystem Vendor ID: 0000 00:13:59.255 Serial Number: .................... 00:13:59.255 Model Number: ........................................ 
00:13:59.255 Firmware Version: 25.01 00:13:59.255 Recommended Arb Burst: 0 00:13:59.255 IEEE OUI Identifier: 00 00 00 00:13:59.255 Multi-path I/O 00:13:59.255 May have multiple subsystem ports: No 00:13:59.255 May have multiple controllers: No 00:13:59.255 Associated with SR-IOV VF: No 00:13:59.255 Max Data Transfer Size: 131072 00:13:59.255 Max Number of Namespaces: 0 00:13:59.255 Max Number of I/O Queues: 1024 00:13:59.255 NVMe Specification Version (VS): 1.3 00:13:59.255 NVMe Specification Version (Identify): 1.3 00:13:59.255 Maximum Queue Entries: 128 00:13:59.255 Contiguous Queues Required: Yes 00:13:59.255 Arbitration Mechanisms Supported 00:13:59.255 Weighted Round Robin: Not Supported 00:13:59.255 Vendor Specific: Not Supported 00:13:59.255 Reset Timeout: 15000 ms 00:13:59.255 Doorbell Stride: 4 bytes 00:13:59.255 NVM Subsystem Reset: Not Supported 00:13:59.255 Command Sets Supported 00:13:59.255 NVM Command Set: Supported 00:13:59.255 Boot Partition: Not Supported 00:13:59.255 Memory Page Size Minimum: 4096 bytes 00:13:59.255 Memory Page Size Maximum: 4096 bytes 00:13:59.255 Persistent Memory Region: Not Supported 00:13:59.255 Optional Asynchronous Events Supported 00:13:59.255 Namespace Attribute Notices: Not Supported 00:13:59.255 Firmware Activation Notices: Not Supported 00:13:59.255 ANA Change Notices: Not Supported 00:13:59.255 PLE Aggregate Log Change Notices: Not Supported 00:13:59.255 LBA Status Info Alert Notices: Not Supported 00:13:59.255 EGE Aggregate Log Change Notices: Not Supported 00:13:59.255 Normal NVM Subsystem Shutdown event: Not Supported 00:13:59.255 Zone Descriptor Change Notices: Not Supported 00:13:59.255 Discovery Log Change Notices: Supported 00:13:59.255 Controller Attributes 00:13:59.255 128-bit Host Identifier: Not Supported 00:13:59.255 Non-Operational Permissive Mode: Not Supported 00:13:59.255 NVM Sets: Not Supported 00:13:59.255 Read Recovery Levels: Not Supported 00:13:59.255 Endurance Groups: Not Supported 00:13:59.255 Predictable Latency Mode: Not Supported 00:13:59.255 Traffic Based Keep ALive: Not Supported 00:13:59.255 Namespace Granularity: Not Supported 00:13:59.255 SQ Associations: Not Supported 00:13:59.255 UUID List: Not Supported 00:13:59.255 Multi-Domain Subsystem: Not Supported 00:13:59.255 Fixed Capacity Management: Not Supported 00:13:59.255 Variable Capacity Management: Not Supported 00:13:59.255 Delete Endurance Group: Not Supported 00:13:59.255 Delete NVM Set: Not Supported 00:13:59.255 Extended LBA Formats Supported: Not Supported 00:13:59.255 Flexible Data Placement Supported: Not Supported 00:13:59.255 00:13:59.255 Controller Memory Buffer Support 00:13:59.255 ================================ 00:13:59.255 Supported: No 00:13:59.255 00:13:59.255 Persistent Memory Region Support 00:13:59.255 ================================ 00:13:59.255 Supported: No 00:13:59.255 00:13:59.255 Admin Command Set Attributes 00:13:59.255 ============================ 00:13:59.255 Security Send/Receive: Not Supported 00:13:59.255 Format NVM: Not Supported 00:13:59.255 Firmware Activate/Download: Not Supported 00:13:59.255 Namespace Management: Not Supported 00:13:59.255 Device Self-Test: Not Supported 00:13:59.255 Directives: Not Supported 00:13:59.255 NVMe-MI: Not Supported 00:13:59.255 Virtualization Management: Not Supported 00:13:59.255 Doorbell Buffer Config: Not Supported 00:13:59.255 Get LBA Status Capability: Not Supported 00:13:59.255 Command & Feature Lockdown Capability: Not Supported 00:13:59.255 Abort Command Limit: 1 00:13:59.255 Async 
Event Request Limit: 4 00:13:59.255 Number of Firmware Slots: N/A 00:13:59.255 Firmware Slot 1 Read-Only: N/A 00:13:59.255 Firmware Activation Without Reset: N/A 00:13:59.256 Multiple Update Detection Support: N/A 00:13:59.256 Firmware Update Granularity: No Information Provided 00:13:59.256 Per-Namespace SMART Log: No 00:13:59.256 Asymmetric Namespace Access Log Page: Not Supported 00:13:59.256 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:59.256 Command Effects Log Page: Not Supported 00:13:59.256 Get Log Page Extended Data: Supported 00:13:59.256 Telemetry Log Pages: Not Supported 00:13:59.256 Persistent Event Log Pages: Not Supported 00:13:59.256 Supported Log Pages Log Page: May Support 00:13:59.256 Commands Supported & Effects Log Page: Not Supported 00:13:59.256 Feature Identifiers & Effects Log Page:May Support 00:13:59.256 NVMe-MI Commands & Effects Log Page: May Support 00:13:59.256 Data Area 4 for Telemetry Log: Not Supported 00:13:59.256 Error Log Page Entries Supported: 128 00:13:59.256 Keep Alive: Not Supported 00:13:59.256 00:13:59.256 NVM Command Set Attributes 00:13:59.256 ========================== 00:13:59.256 Submission Queue Entry Size 00:13:59.256 Max: 1 00:13:59.256 Min: 1 00:13:59.256 Completion Queue Entry Size 00:13:59.256 Max: 1 00:13:59.256 Min: 1 00:13:59.256 Number of Namespaces: 0 00:13:59.256 Compare Command: Not Supported 00:13:59.256 Write Uncorrectable Command: Not Supported 00:13:59.256 Dataset Management Command: Not Supported 00:13:59.256 Write Zeroes Command: Not Supported 00:13:59.256 Set Features Save Field: Not Supported 00:13:59.256 Reservations: Not Supported 00:13:59.256 Timestamp: Not Supported 00:13:59.256 Copy: Not Supported 00:13:59.256 Volatile Write Cache: Not Present 00:13:59.256 Atomic Write Unit (Normal): 1 00:13:59.256 Atomic Write Unit (PFail): 1 00:13:59.256 Atomic Compare & Write Unit: 1 00:13:59.256 Fused Compare & Write: Supported 00:13:59.256 Scatter-Gather List 00:13:59.256 SGL Command Set: Supported 00:13:59.256 SGL Keyed: Supported 00:13:59.256 SGL Bit Bucket Descriptor: Not Supported 00:13:59.256 SGL Metadata Pointer: Not Supported 00:13:59.256 Oversized SGL: Not Supported 00:13:59.256 SGL Metadata Address: Not Supported 00:13:59.256 SGL Offset: Supported 00:13:59.256 Transport SGL Data Block: Not Supported 00:13:59.256 Replay Protected Memory Block: Not Supported 00:13:59.256 00:13:59.256 Firmware Slot Information 00:13:59.256 ========================= 00:13:59.256 Active slot: 0 00:13:59.256 00:13:59.256 00:13:59.256 Error Log 00:13:59.256 ========= 00:13:59.256 00:13:59.256 Active Namespaces 00:13:59.256 ================= 00:13:59.256 Discovery Log Page 00:13:59.256 ================== 00:13:59.256 Generation Counter: 2 00:13:59.256 Number of Records: 2 00:13:59.256 Record Format: 0 00:13:59.256 00:13:59.256 Discovery Log Entry 0 00:13:59.256 ---------------------- 00:13:59.256 Transport Type: 3 (TCP) 00:13:59.256 Address Family: 1 (IPv4) 00:13:59.256 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:59.256 Entry Flags: 00:13:59.256 Duplicate Returned Information: 1 00:13:59.256 Explicit Persistent Connection Support for Discovery: 1 00:13:59.256 Transport Requirements: 00:13:59.256 Secure Channel: Not Required 00:13:59.256 Port ID: 0 (0x0000) 00:13:59.256 Controller ID: 65535 (0xffff) 00:13:59.256 Admin Max SQ Size: 128 00:13:59.256 Transport Service Identifier: 4420 00:13:59.256 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:59.256 Transport Address: 10.0.0.3 00:13:59.256 
Discovery Log Entry 1 00:13:59.256 ---------------------- 00:13:59.256 Transport Type: 3 (TCP) 00:13:59.256 Address Family: 1 (IPv4) 00:13:59.256 Subsystem Type: 2 (NVM Subsystem) 00:13:59.256 Entry Flags: 00:13:59.256 Duplicate Returned Information: 0 00:13:59.256 Explicit Persistent Connection Support for Discovery: 0 00:13:59.256 Transport Requirements: 00:13:59.256 Secure Channel: Not Required 00:13:59.256 Port ID: 0 (0x0000) 00:13:59.256 Controller ID: 65535 (0xffff) 00:13:59.256 Admin Max SQ Size: 128 00:13:59.256 Transport Service Identifier: 4420 00:13:59.256 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:59.256 Transport Address: 10.0.0.3 [2024-11-17 13:23:48.450434] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:13:59.256 [2024-11-17 13:23:48.450450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f740) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.256 [2024-11-17 13:23:48.450463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69f8c0) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.256 [2024-11-17 13:23:48.450471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fa40) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.256 [2024-11-17 13:23:48.450480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.256 [2024-11-17 13:23:48.450492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.256 [2024-11-17 13:23:48.450506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.256 [2024-11-17 13:23:48.450530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.256 [2024-11-17 13:23:48.450601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.256 [2024-11-17 13:23:48.450607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.256 [2024-11-17 13:23:48.450610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.256 [2024-11-17 13:23:48.450634] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.256 [2024-11-17 13:23:48.450653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.256 [2024-11-17 13:23:48.450722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.256 [2024-11-17 13:23:48.450728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.256 [2024-11-17 13:23:48.450731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450739] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:13:59.256 [2024-11-17 13:23:48.450743] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:13:59.256 [2024-11-17 13:23:48.450752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.256 [2024-11-17 13:23:48.450790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.256 [2024-11-17 13:23:48.450808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.256 [2024-11-17 13:23:48.450871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.256 [2024-11-17 13:23:48.450876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.256 [2024-11-17 13:23:48.450880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.256 [2024-11-17 13:23:48.450906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.256 [2024-11-17 13:23:48.450922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.256 [2024-11-17 13:23:48.450977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.256 [2024-11-17 13:23:48.450983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.256 [2024-11-17 13:23:48.450986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.450989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.256 [2024-11-17 13:23:48.450998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.451002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.256 [2024-11-17 13:23:48.451005] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.256 [2024-11-17 13:23:48.451011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.451664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.451711] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.451716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.451719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.451731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.451738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.451744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.455769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.455797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.455804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.455807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.455811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.455822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.455827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.455830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x63b750) 00:13:59.257 [2024-11-17 13:23:48.455838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.257 [2024-11-17 13:23:48.455860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x69fbc0, cid 3, qid 0 00:13:59.257 [2024-11-17 13:23:48.455914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.257 [2024-11-17 13:23:48.455920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.257 [2024-11-17 13:23:48.455923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.257 [2024-11-17 13:23:48.455926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x69fbc0) on tqpair=0x63b750 00:13:59.257 [2024-11-17 13:23:48.455934] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:13:59.520 00:13:59.520 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:59.520 [2024-11-17 13:23:48.499537] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
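
The discovery log printed above advertises both the discovery subsystem and nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420, and the run that follows repeats identify directly against that NVM subsystem. For comparison only (not part of this test run), the same information can be pulled from a Linux initiator with nvme-cli, assuming nvme-cli and the nvme-tcp kernel module are available and 10.0.0.3 is reachable:

    modprobe nvme-tcp
    nvme discover -t tcp -a 10.0.0.3 -s 4420                                 # fetch the same discovery log page
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme list                                                                # Malloc0 should show up as a namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
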
00:13:59.520 [2024-11-17 13:23:48.499603] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73958 ] 00:13:59.520 [2024-11-17 13:23:48.653169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:13:59.520 [2024-11-17 13:23:48.653225] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:59.520 [2024-11-17 13:23:48.653231] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:59.520 [2024-11-17 13:23:48.653240] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:59.520 [2024-11-17 13:23:48.653249] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:59.520 [2024-11-17 13:23:48.653482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:13:59.520 [2024-11-17 13:23:48.653530] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d36750 0 00:13:59.520 [2024-11-17 13:23:48.660867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:59.520 [2024-11-17 13:23:48.660888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:59.520 [2024-11-17 13:23:48.660893] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:59.520 [2024-11-17 13:23:48.660896] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:59.520 [2024-11-17 13:23:48.660926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.660932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.660936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.520 [2024-11-17 13:23:48.660946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:59.520 [2024-11-17 13:23:48.660974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.520 [2024-11-17 13:23:48.668806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.520 [2024-11-17 13:23:48.668824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.520 [2024-11-17 13:23:48.668828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.668832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.520 [2024-11-17 13:23:48.668845] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:59.520 [2024-11-17 13:23:48.668851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:13:59.520 [2024-11-17 13:23:48.668857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:13:59.520 [2024-11-17 13:23:48.668870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.668875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.668878] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.520 [2024-11-17 13:23:48.668886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.520 [2024-11-17 13:23:48.668911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.520 [2024-11-17 13:23:48.668966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.520 [2024-11-17 13:23:48.668972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.520 [2024-11-17 13:23:48.668976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.668980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.520 [2024-11-17 13:23:48.668984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:13:59.520 [2024-11-17 13:23:48.668991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:13:59.520 [2024-11-17 13:23:48.668998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.669002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.669005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.520 [2024-11-17 13:23:48.669011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.520 [2024-11-17 13:23:48.669027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.520 [2024-11-17 13:23:48.669079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.520 [2024-11-17 13:23:48.669085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.520 [2024-11-17 13:23:48.669088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.669092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.520 [2024-11-17 13:23:48.669096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:13:59.520 [2024-11-17 13:23:48.669104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:59.520 [2024-11-17 13:23:48.669110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.669114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.520 [2024-11-17 13:23:48.669117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.669123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.521 [2024-11-17 13:23:48.669138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.521 [2024-11-17 13:23:48.669184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.521 [2024-11-17 13:23:48.669190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.521 
[2024-11-17 13:23:48.669192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.521 [2024-11-17 13:23:48.669201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:59.521 [2024-11-17 13:23:48.669210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.669223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.521 [2024-11-17 13:23:48.669237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.521 [2024-11-17 13:23:48.669280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.521 [2024-11-17 13:23:48.669285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.521 [2024-11-17 13:23:48.669288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.521 [2024-11-17 13:23:48.669296] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:59.521 [2024-11-17 13:23:48.669300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:59.521 [2024-11-17 13:23:48.669307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:59.521 [2024-11-17 13:23:48.669417] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:13:59.521 [2024-11-17 13:23:48.669423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:59.521 [2024-11-17 13:23:48.669430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.669443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.521 [2024-11-17 13:23:48.669460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.521 [2024-11-17 13:23:48.669508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.521 [2024-11-17 13:23:48.669514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.521 [2024-11-17 13:23:48.669517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 
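(Editor's note) Everything from the "DPDK EAL parameters: identify ..." line onward is SPDK's identify example app attaching to the target at 10.0.0.3:4420 and walking the controller-enable handshake traced above: read VS and CAP, check CC.EN, write CC.EN = 1, then poll for CSTS.RDY = 1. A rough manual equivalent is sketched below; it assumes the example binary sits in the usual build/examples location of an SPDK debug build and that the debug output seen here came from enabling log flags, so treat the exact path and flags as illustrative rather than what the CI harness ran verbatim.
    # hypothetical manual re-run of the identify example against the same target;
    # adjust the binary path to your own build tree
    sudo ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'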
00:13:59.521 [2024-11-17 13:23:48.669525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:59.521 [2024-11-17 13:23:48.669534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.669547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.521 [2024-11-17 13:23:48.669562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.521 [2024-11-17 13:23:48.669608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.521 [2024-11-17 13:23:48.669614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.521 [2024-11-17 13:23:48.669617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.521 [2024-11-17 13:23:48.669625] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:59.521 [2024-11-17 13:23:48.669631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:59.521 [2024-11-17 13:23:48.669638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:13:59.521 [2024-11-17 13:23:48.669651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:59.521 [2024-11-17 13:23:48.669660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.669671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.521 [2024-11-17 13:23:48.669687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.521 [2024-11-17 13:23:48.669795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.521 [2024-11-17 13:23:48.669803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.521 [2024-11-17 13:23:48.669806] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669810] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=4096, cccid=0 00:13:59.521 [2024-11-17 13:23:48.669814] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9a740) on tqpair(0x1d36750): expected_datao=0, payload_size=4096 00:13:59.521 [2024-11-17 13:23:48.669818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669825] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669829] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.521 [2024-11-17 13:23:48.669841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.521 [2024-11-17 13:23:48.669844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.521 [2024-11-17 13:23:48.669855] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:13:59.521 [2024-11-17 13:23:48.669860] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:13:59.521 [2024-11-17 13:23:48.669864] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:13:59.521 [2024-11-17 13:23:48.669867] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:13:59.521 [2024-11-17 13:23:48.669871] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:13:59.521 [2024-11-17 13:23:48.669876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:13:59.521 [2024-11-17 13:23:48.669888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:59.521 [2024-11-17 13:23:48.669895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.669908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:59.521 [2024-11-17 13:23:48.669927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.521 [2024-11-17 13:23:48.669982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.521 [2024-11-17 13:23:48.669988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.521 [2024-11-17 13:23:48.669991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.669994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.521 [2024-11-17 13:23:48.670000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.670013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.521 [2024-11-17 13:23:48.670018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 
13:23:48.670025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.670030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.521 [2024-11-17 13:23:48.670035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.670046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.521 [2024-11-17 13:23:48.670051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.670062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.521 [2024-11-17 13:23:48.670066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:59.521 [2024-11-17 13:23:48.670078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:59.521 [2024-11-17 13:23:48.670084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.521 [2024-11-17 13:23:48.670088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.521 [2024-11-17 13:23:48.670093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.522 [2024-11-17 13:23:48.670111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a740, cid 0, qid 0 00:13:59.522 [2024-11-17 13:23:48.670116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9a8c0, cid 1, qid 0 00:13:59.522 [2024-11-17 13:23:48.670120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9aa40, cid 2, qid 0 00:13:59.522 [2024-11-17 13:23:48.670124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.522 [2024-11-17 13:23:48.670129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.522 [2024-11-17 13:23:48.670218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.522 [2024-11-17 13:23:48.670224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.522 [2024-11-17 13:23:48.670227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.522 [2024-11-17 13:23:48.670235] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:13:59.522 [2024-11-17 13:23:48.670240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.522 [2024-11-17 13:23:48.670278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:59.522 [2024-11-17 13:23:48.670293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.522 [2024-11-17 13:23:48.670347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.522 [2024-11-17 13:23:48.670353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.522 [2024-11-17 13:23:48.670356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.522 [2024-11-17 13:23:48.670413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.522 [2024-11-17 13:23:48.670449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.522 [2024-11-17 13:23:48.670466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.522 [2024-11-17 13:23:48.670527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.522 [2024-11-17 13:23:48.670541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.522 [2024-11-17 13:23:48.670544] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670548] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=4096, cccid=4 00:13:59.522 [2024-11-17 13:23:48.670552] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9ad40) on tqpair(0x1d36750): expected_datao=0, payload_size=4096 00:13:59.522 [2024-11-17 13:23:48.670556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670562] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670566] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 
13:23:48.670573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.522 [2024-11-17 13:23:48.670578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.522 [2024-11-17 13:23:48.670581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.522 [2024-11-17 13:23:48.670599] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:13:59.522 [2024-11-17 13:23:48.670610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.522 [2024-11-17 13:23:48.670638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.522 [2024-11-17 13:23:48.670655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.522 [2024-11-17 13:23:48.670772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.522 [2024-11-17 13:23:48.670779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.522 [2024-11-17 13:23:48.670782] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670786] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=4096, cccid=4 00:13:59.522 [2024-11-17 13:23:48.670790] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9ad40) on tqpair(0x1d36750): expected_datao=0, payload_size=4096 00:13:59.522 [2024-11-17 13:23:48.670794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670800] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670803] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.522 [2024-11-17 13:23:48.670815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.522 [2024-11-17 13:23:48.670818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.522 [2024-11-17 13:23:48.670840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.670859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.522 [2024-11-17 13:23:48.670869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.522 [2024-11-17 13:23:48.670887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.522 [2024-11-17 13:23:48.670953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.522 [2024-11-17 13:23:48.670959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.522 [2024-11-17 13:23:48.670962] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=4096, cccid=4 00:13:59.522 [2024-11-17 13:23:48.670969] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9ad40) on tqpair(0x1d36750): expected_datao=0, payload_size=4096 00:13:59.522 [2024-11-17 13:23:48.670973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670979] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670982] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.670989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.522 [2024-11-17 13:23:48.670994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.522 [2024-11-17 13:23:48.670997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.671001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.522 [2024-11-17 13:23:48.671009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671050] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:13:59.522 [2024-11-17 13:23:48.671054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:13:59.522 [2024-11-17 13:23:48.671059] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:13:59.522 [2024-11-17 13:23:48.671072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.522 
[2024-11-17 13:23:48.671076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.522 [2024-11-17 13:23:48.671082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.522 [2024-11-17 13:23:48.671088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.671092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.522 [2024-11-17 13:23:48.671095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d36750) 00:13:59.522 [2024-11-17 13:23:48.671100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:59.522 [2024-11-17 13:23:48.671121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.522 [2024-11-17 13:23:48.671128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9aec0, cid 5, qid 0 00:13:59.522 [2024-11-17 13:23:48.671194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9aec0) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9aec0, cid 5, qid 0 00:13:59.523 [2024-11-17 13:23:48.671307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9aec0) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671351] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9aec0, cid 5, qid 0 00:13:59.523 [2024-11-17 13:23:48.671400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9aec0) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9aec0, cid 5, qid 0 00:13:59.523 [2024-11-17 13:23:48.671501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9aec0) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d36750) 00:13:59.523 [2024-11-17 13:23:48.671588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.523 [2024-11-17 13:23:48.671605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9aec0, cid 5, qid 0 00:13:59.523 
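(Editor's note) The four GET LOG PAGE commands just above fetch the standard admin log pages right after init. Decoding their cdw10 fields (bits 7:0 = Log Page Identifier, bits 31:16 = NUMDL, the lower 16 bits of the dword count minus one; the RAE/LSP bits are zero here) gives transfer sizes that match the c2h_data payload lengths seen later in the trace: 8192, 512, 512 and 4096 bytes. A minimal bash sketch of that decode:
    # decode the Get Log Page CDW10 values from the trace above:
    #   0x01 Error Information, 0x02 SMART/Health, 0x03 Firmware Slot Info,
    #   0x05 Commands Supported and Effects
    for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
        lid=$(( cdw10 & 0xff ))                 # bits 7:0  = Log Page Identifier
        numdl=$(( (cdw10 >> 16) & 0xffff ))     # bits 31:16 = dwords - 1 (low part)
        printf '%s -> LID 0x%02x, transfer %d bytes\n' "$cdw10" "$lid" $(( (numdl + 1) * 4 ))
    done
The 8192-byte Error Information read is consistent with the controller report below advertising 128 error log entries (128 entries x 64 bytes).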
[2024-11-17 13:23:48.671611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9ad40, cid 4, qid 0 00:13:59.523 [2024-11-17 13:23:48.671615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9b040, cid 6, qid 0 00:13:59.523 [2024-11-17 13:23:48.671619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9b1c0, cid 7, qid 0 00:13:59.523 [2024-11-17 13:23:48.671756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.523 [2024-11-17 13:23:48.671777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.523 [2024-11-17 13:23:48.671780] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671783] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=8192, cccid=5 00:13:59.523 [2024-11-17 13:23:48.671787] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9aec0) on tqpair(0x1d36750): expected_datao=0, payload_size=8192 00:13:59.523 [2024-11-17 13:23:48.671792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671806] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671811] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.523 [2024-11-17 13:23:48.671821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.523 [2024-11-17 13:23:48.671824] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671827] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=512, cccid=4 00:13:59.523 [2024-11-17 13:23:48.671831] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9ad40) on tqpair(0x1d36750): expected_datao=0, payload_size=512 00:13:59.523 [2024-11-17 13:23:48.671835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671840] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671843] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.523 [2024-11-17 13:23:48.671852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.523 [2024-11-17 13:23:48.671855] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671858] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=512, cccid=6 00:13:59.523 [2024-11-17 13:23:48.671862] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9b040) on tqpair(0x1d36750): expected_datao=0, payload_size=512 00:13:59.523 [2024-11-17 13:23:48.671865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671870] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:59.523 [2024-11-17 13:23:48.671882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:59.523 [2024-11-17 13:23:48.671885] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d36750): datao=0, datal=4096, cccid=7 00:13:59.523 [2024-11-17 13:23:48.671892] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d9b1c0) on tqpair(0x1d36750): expected_datao=0, payload_size=4096 00:13:59.523 [2024-11-17 13:23:48.671895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671900] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9aec0) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 [2024-11-17 13:23:48.671941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.523 [2024-11-17 13:23:48.671944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.523 [2024-11-17 13:23:48.671947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9ad40) on tqpair=0x1d36750 00:13:59.523 [2024-11-17 13:23:48.671958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.523 ===================================================== 00:13:59.523 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.523 ===================================================== 00:13:59.523 Controller Capabilities/Features 00:13:59.523 ================================ 00:13:59.523 Vendor ID: 8086 00:13:59.523 Subsystem Vendor ID: 8086 00:13:59.523 Serial Number: SPDK00000000000001 00:13:59.523 Model Number: SPDK bdev Controller 00:13:59.523 Firmware Version: 25.01 00:13:59.523 Recommended Arb Burst: 6 00:13:59.523 IEEE OUI Identifier: e4 d2 5c 00:13:59.523 Multi-path I/O 00:13:59.523 May have multiple subsystem ports: Yes 00:13:59.523 May have multiple controllers: Yes 00:13:59.523 Associated with SR-IOV VF: No 00:13:59.523 Max Data Transfer Size: 131072 00:13:59.523 Max Number of Namespaces: 32 00:13:59.523 Max Number of I/O Queues: 127 00:13:59.523 NVMe Specification Version (VS): 1.3 00:13:59.523 NVMe Specification Version (Identify): 1.3 00:13:59.523 Maximum Queue Entries: 128 00:13:59.523 Contiguous Queues Required: Yes 00:13:59.523 Arbitration Mechanisms Supported 00:13:59.523 Weighted Round Robin: Not Supported 00:13:59.523 Vendor Specific: Not Supported 00:13:59.523 Reset Timeout: 15000 ms 00:13:59.523 Doorbell Stride: 4 bytes 00:13:59.523 NVM Subsystem Reset: Not Supported 00:13:59.523 Command Sets Supported 00:13:59.524 NVM Command Set: Supported 00:13:59.524 Boot Partition: Not Supported 00:13:59.524 Memory Page Size Minimum: 4096 bytes 00:13:59.524 Memory Page Size Maximum: 4096 bytes 00:13:59.524 Persistent Memory Region: Not Supported 00:13:59.524 Optional Asynchronous Events Supported 00:13:59.524 Namespace Attribute Notices: Supported 00:13:59.524 Firmware Activation Notices: Not Supported 00:13:59.524 ANA Change Notices: Not 
Supported 00:13:59.524 PLE Aggregate Log Change Notices: Not Supported 00:13:59.524 LBA Status Info Alert Notices: Not Supported 00:13:59.524 EGE Aggregate Log Change Notices: Not Supported 00:13:59.524 Normal NVM Subsystem Shutdown event: Not Supported 00:13:59.524 Zone Descriptor Change Notices: Not Supported 00:13:59.524 Discovery Log Change Notices: Not Supported 00:13:59.524 Controller Attributes 00:13:59.524 128-bit Host Identifier: Supported 00:13:59.524 Non-Operational Permissive Mode: Not Supported 00:13:59.524 NVM Sets: Not Supported 00:13:59.524 Read Recovery Levels: Not Supported 00:13:59.524 Endurance Groups: Not Supported 00:13:59.524 Predictable Latency Mode: Not Supported 00:13:59.524 Traffic Based Keep ALive: Not Supported 00:13:59.524 Namespace Granularity: Not Supported 00:13:59.524 SQ Associations: Not Supported 00:13:59.524 UUID List: Not Supported 00:13:59.524 Multi-Domain Subsystem: Not Supported 00:13:59.524 Fixed Capacity Management: Not Supported 00:13:59.524 Variable Capacity Management: Not Supported 00:13:59.524 Delete Endurance Group: Not Supported 00:13:59.524 Delete NVM Set: Not Supported 00:13:59.524 Extended LBA Formats Supported: Not Supported 00:13:59.524 Flexible Data Placement Supported: Not Supported 00:13:59.524 00:13:59.524 Controller Memory Buffer Support 00:13:59.524 ================================ 00:13:59.524 Supported: No 00:13:59.524 00:13:59.524 Persistent Memory Region Support 00:13:59.524 ================================ 00:13:59.524 Supported: No 00:13:59.524 00:13:59.524 Admin Command Set Attributes 00:13:59.524 ============================ 00:13:59.524 Security Send/Receive: Not Supported 00:13:59.524 Format NVM: Not Supported 00:13:59.524 Firmware Activate/Download: Not Supported 00:13:59.524 Namespace Management: Not Supported 00:13:59.524 Device Self-Test: Not Supported 00:13:59.524 Directives: Not Supported 00:13:59.524 NVMe-MI: Not Supported 00:13:59.524 Virtualization Management: Not Supported 00:13:59.524 Doorbell Buffer Config: Not Supported 00:13:59.524 Get LBA Status Capability: Not Supported 00:13:59.524 Command & Feature Lockdown Capability: Not Supported 00:13:59.524 Abort Command Limit: 4 00:13:59.524 Async Event Request Limit: 4 00:13:59.524 Number of Firmware Slots: N/A 00:13:59.524 Firmware Slot 1 Read-Only: N/A 00:13:59.524 Firmware Activation Without Reset: [2024-11-17 13:23:48.671964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.524 [2024-11-17 13:23:48.671967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.524 [2024-11-17 13:23:48.671970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9b040) on tqpair=0x1d36750 00:13:59.524 [2024-11-17 13:23:48.671976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.524 [2024-11-17 13:23:48.671981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.524 [2024-11-17 13:23:48.671984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.524 [2024-11-17 13:23:48.671987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9b1c0) on tqpair=0x1d36750 00:13:59.524 N/A 00:13:59.524 Multiple Update Detection Support: N/A 00:13:59.524 Firmware Update Granularity: No Information Provided 00:13:59.524 Per-Namespace SMART Log: No 00:13:59.524 Asymmetric Namespace Access Log Page: Not Supported 00:13:59.524 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:59.524 Command Effects Log Page: Supported 00:13:59.524 Get Log Page Extended 
Data: Supported 00:13:59.524 Telemetry Log Pages: Not Supported 00:13:59.524 Persistent Event Log Pages: Not Supported 00:13:59.524 Supported Log Pages Log Page: May Support 00:13:59.524 Commands Supported & Effects Log Page: Not Supported 00:13:59.524 Feature Identifiers & Effects Log Page:May Support 00:13:59.524 NVMe-MI Commands & Effects Log Page: May Support 00:13:59.524 Data Area 4 for Telemetry Log: Not Supported 00:13:59.524 Error Log Page Entries Supported: 128 00:13:59.524 Keep Alive: Supported 00:13:59.524 Keep Alive Granularity: 10000 ms 00:13:59.524 00:13:59.524 NVM Command Set Attributes 00:13:59.524 ========================== 00:13:59.524 Submission Queue Entry Size 00:13:59.524 Max: 64 00:13:59.524 Min: 64 00:13:59.524 Completion Queue Entry Size 00:13:59.524 Max: 16 00:13:59.524 Min: 16 00:13:59.524 Number of Namespaces: 32 00:13:59.524 Compare Command: Supported 00:13:59.524 Write Uncorrectable Command: Not Supported 00:13:59.524 Dataset Management Command: Supported 00:13:59.524 Write Zeroes Command: Supported 00:13:59.524 Set Features Save Field: Not Supported 00:13:59.524 Reservations: Supported 00:13:59.524 Timestamp: Not Supported 00:13:59.524 Copy: Supported 00:13:59.524 Volatile Write Cache: Present 00:13:59.524 Atomic Write Unit (Normal): 1 00:13:59.524 Atomic Write Unit (PFail): 1 00:13:59.524 Atomic Compare & Write Unit: 1 00:13:59.524 Fused Compare & Write: Supported 00:13:59.524 Scatter-Gather List 00:13:59.524 SGL Command Set: Supported 00:13:59.524 SGL Keyed: Supported 00:13:59.524 SGL Bit Bucket Descriptor: Not Supported 00:13:59.524 SGL Metadata Pointer: Not Supported 00:13:59.524 Oversized SGL: Not Supported 00:13:59.524 SGL Metadata Address: Not Supported 00:13:59.524 SGL Offset: Supported 00:13:59.524 Transport SGL Data Block: Not Supported 00:13:59.524 Replay Protected Memory Block: Not Supported 00:13:59.524 00:13:59.524 Firmware Slot Information 00:13:59.524 ========================= 00:13:59.524 Active slot: 1 00:13:59.524 Slot 1 Firmware Revision: 25.01 00:13:59.524 00:13:59.524 00:13:59.524 Commands Supported and Effects 00:13:59.524 ============================== 00:13:59.524 Admin Commands 00:13:59.524 -------------- 00:13:59.524 Get Log Page (02h): Supported 00:13:59.524 Identify (06h): Supported 00:13:59.524 Abort (08h): Supported 00:13:59.524 Set Features (09h): Supported 00:13:59.524 Get Features (0Ah): Supported 00:13:59.524 Asynchronous Event Request (0Ch): Supported 00:13:59.524 Keep Alive (18h): Supported 00:13:59.524 I/O Commands 00:13:59.524 ------------ 00:13:59.524 Flush (00h): Supported LBA-Change 00:13:59.524 Write (01h): Supported LBA-Change 00:13:59.524 Read (02h): Supported 00:13:59.524 Compare (05h): Supported 00:13:59.524 Write Zeroes (08h): Supported LBA-Change 00:13:59.524 Dataset Management (09h): Supported LBA-Change 00:13:59.524 Copy (19h): Supported LBA-Change 00:13:59.524 00:13:59.524 Error Log 00:13:59.524 ========= 00:13:59.524 00:13:59.524 Arbitration 00:13:59.524 =========== 00:13:59.524 Arbitration Burst: 1 00:13:59.524 00:13:59.524 Power Management 00:13:59.524 ================ 00:13:59.524 Number of Power States: 1 00:13:59.524 Current Power State: Power State #0 00:13:59.524 Power State #0: 00:13:59.524 Max Power: 0.00 W 00:13:59.524 Non-Operational State: Operational 00:13:59.524 Entry Latency: Not Reported 00:13:59.524 Exit Latency: Not Reported 00:13:59.524 Relative Read Throughput: 0 00:13:59.524 Relative Read Latency: 0 00:13:59.524 Relative Write Throughput: 0 00:13:59.524 Relative Write Latency: 0 
00:13:59.524 Idle Power: Not Reported 00:13:59.524 Active Power: Not Reported 00:13:59.524 Non-Operational Permissive Mode: Not Supported 00:13:59.524 00:13:59.524 Health Information 00:13:59.524 ================== 00:13:59.524 Critical Warnings: 00:13:59.524 Available Spare Space: OK 00:13:59.524 Temperature: OK 00:13:59.524 Device Reliability: OK 00:13:59.524 Read Only: No 00:13:59.524 Volatile Memory Backup: OK 00:13:59.524 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:59.524 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:59.524 Available Spare: 0% 00:13:59.524 Available Spare Threshold: 0% 00:13:59.524 Life Percentage Used:[2024-11-17 13:23:48.672075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.524 [2024-11-17 13:23:48.672082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d36750) 00:13:59.524 [2024-11-17 13:23:48.672088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.524 [2024-11-17 13:23:48.672109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9b1c0, cid 7, qid 0 00:13:59.524 [2024-11-17 13:23:48.672189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.524 [2024-11-17 13:23:48.672197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.524 [2024-11-17 13:23:48.672200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.524 [2024-11-17 13:23:48.672204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9b1c0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672240] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:13:59.525 [2024-11-17 13:23:48.672251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a740) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.525 [2024-11-17 13:23:48.672262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9a8c0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.525 [2024-11-17 13:23:48.672271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9aa40) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.525 [2024-11-17 13:23:48.672279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:59.525 [2024-11-17 13:23:48.672291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.525 [2024-11-17 13:23:48.672305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:59.525 [2024-11-17 13:23:48.672324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.525 [2024-11-17 13:23:48.672378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.525 [2024-11-17 13:23:48.672384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.525 [2024-11-17 13:23:48.672387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.525 [2024-11-17 13:23:48.672411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.525 [2024-11-17 13:23:48.672429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.525 [2024-11-17 13:23:48.672512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.525 [2024-11-17 13:23:48.672518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.525 [2024-11-17 13:23:48.672521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672529] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:13:59.525 [2024-11-17 13:23:48.672533] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:13:59.525 [2024-11-17 13:23:48.672556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.525 [2024-11-17 13:23:48.672570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.525 [2024-11-17 13:23:48.672585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.525 [2024-11-17 13:23:48.672631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.525 [2024-11-17 13:23:48.672637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.525 [2024-11-17 13:23:48.672640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.525 [2024-11-17 13:23:48.672666] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.525 [2024-11-17 13:23:48.672680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.525 [2024-11-17 13:23:48.672726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.525 [2024-11-17 13:23:48.672732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.525 [2024-11-17 13:23:48.672735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.672747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.672754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.525 [2024-11-17 13:23:48.672760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.525 [2024-11-17 13:23:48.672774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.525 [2024-11-17 13:23:48.676815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.525 [2024-11-17 13:23:48.676831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.525 [2024-11-17 13:23:48.676835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.676839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.676851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.676856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.676859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d36750) 00:13:59.525 [2024-11-17 13:23:48.676866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:59.525 [2024-11-17 13:23:48.676887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d9abc0, cid 3, qid 0 00:13:59.525 [2024-11-17 13:23:48.676936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:59.525 [2024-11-17 13:23:48.676942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:59.525 [2024-11-17 13:23:48.676945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:59.525 [2024-11-17 13:23:48.676948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d9abc0) on tqpair=0x1d36750 00:13:59.525 [2024-11-17 13:23:48.676955] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:13:59.525 0% 00:13:59.525 Data Units Read: 0 00:13:59.525 Data Units Written: 0 00:13:59.525 Host Read Commands: 0 00:13:59.525 Host Write Commands: 0 00:13:59.525 Controller Busy Time: 0 minutes 00:13:59.525 Power Cycles: 0 00:13:59.525 Power On Hours: 0 hours 00:13:59.525 Unsafe Shutdowns: 0 00:13:59.525 Unrecoverable Media Errors: 0 00:13:59.525 Lifetime Error Log Entries: 0 00:13:59.525 Warning Temperature Time: 0 
minutes 00:13:59.525 Critical Temperature Time: 0 minutes 00:13:59.525 00:13:59.525 Number of Queues 00:13:59.525 ================ 00:13:59.525 Number of I/O Submission Queues: 127 00:13:59.525 Number of I/O Completion Queues: 127 00:13:59.525 00:13:59.525 Active Namespaces 00:13:59.525 ================= 00:13:59.525 Namespace ID:1 00:13:59.525 Error Recovery Timeout: Unlimited 00:13:59.525 Command Set Identifier: NVM (00h) 00:13:59.525 Deallocate: Supported 00:13:59.525 Deallocated/Unwritten Error: Not Supported 00:13:59.525 Deallocated Read Value: Unknown 00:13:59.525 Deallocate in Write Zeroes: Not Supported 00:13:59.525 Deallocated Guard Field: 0xFFFF 00:13:59.525 Flush: Supported 00:13:59.525 Reservation: Supported 00:13:59.525 Namespace Sharing Capabilities: Multiple Controllers 00:13:59.525 Size (in LBAs): 131072 (0GiB) 00:13:59.525 Capacity (in LBAs): 131072 (0GiB) 00:13:59.525 Utilization (in LBAs): 131072 (0GiB) 00:13:59.525 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:59.525 EUI64: ABCDEF0123456789 00:13:59.525 UUID: 53ce54e5-43a5-420a-8a17-fbbdd6ffbfe1 00:13:59.525 Thin Provisioning: Not Supported 00:13:59.525 Per-NS Atomic Units: Yes 00:13:59.525 Atomic Boundary Size (Normal): 0 00:13:59.526 Atomic Boundary Size (PFail): 0 00:13:59.526 Atomic Boundary Offset: 0 00:13:59.526 Maximum Single Source Range Length: 65535 00:13:59.526 Maximum Copy Length: 65535 00:13:59.526 Maximum Source Range Count: 1 00:13:59.526 NGUID/EUI64 Never Reused: No 00:13:59.526 Namespace Write Protected: No 00:13:59.526 Number of LBA Formats: 1 00:13:59.526 Current LBA Format: LBA Format #00 00:13:59.526 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:59.526 00:13:59.526 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:59.785 rmmod nvme_tcp 00:13:59.785 rmmod nvme_fabrics 00:13:59.785 rmmod nvme_keyring 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73923 ']' 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 73923 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73923 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.785 killing process with pid 73923 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73923 00:13:59.785 13:23:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73923 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:00.045 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.304 13:23:49 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:00.304 00:14:00.304 real 0m2.394s 00:14:00.304 user 0m4.953s 00:14:00.304 sys 0m0.806s 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:00.304 ************************************ 00:14:00.304 END TEST nvmf_identify 00:14:00.304 ************************************ 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:00.304 ************************************ 00:14:00.304 START TEST nvmf_perf 00:14:00.304 ************************************ 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:00.304 * Looking for test storage... 
00:14:00.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.304 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.565 --rc genhtml_branch_coverage=1 00:14:00.565 --rc genhtml_function_coverage=1 00:14:00.565 --rc genhtml_legend=1 00:14:00.565 --rc geninfo_all_blocks=1 00:14:00.565 --rc geninfo_unexecuted_blocks=1 00:14:00.565 00:14:00.565 ' 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.565 --rc genhtml_branch_coverage=1 00:14:00.565 --rc genhtml_function_coverage=1 00:14:00.565 --rc genhtml_legend=1 00:14:00.565 --rc geninfo_all_blocks=1 00:14:00.565 --rc geninfo_unexecuted_blocks=1 00:14:00.565 00:14:00.565 ' 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.565 --rc genhtml_branch_coverage=1 00:14:00.565 --rc genhtml_function_coverage=1 00:14:00.565 --rc genhtml_legend=1 00:14:00.565 --rc geninfo_all_blocks=1 00:14:00.565 --rc geninfo_unexecuted_blocks=1 00:14:00.565 00:14:00.565 ' 00:14:00.565 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.565 --rc genhtml_branch_coverage=1 00:14:00.565 --rc genhtml_function_coverage=1 00:14:00.565 --rc genhtml_legend=1 00:14:00.565 --rc geninfo_all_blocks=1 00:14:00.565 --rc geninfo_unexecuted_blocks=1 00:14:00.565 00:14:00.565 ' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.566 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:00.566 Cannot find device "nvmf_init_br" 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:00.566 Cannot find device "nvmf_init_br2" 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:00.566 Cannot find device "nvmf_tgt_br" 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.566 Cannot find device "nvmf_tgt_br2" 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:00.566 Cannot find device "nvmf_init_br" 00:14:00.566 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:00.567 Cannot find device "nvmf_init_br2" 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:00.567 Cannot find device "nvmf_tgt_br" 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:00.567 Cannot find device "nvmf_tgt_br2" 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:00.567 Cannot find device "nvmf_br" 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:00.567 Cannot find device "nvmf_init_if" 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:00.567 Cannot find device "nvmf_init_if2" 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.567 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:00.827 13:23:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:00.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:00.827 00:14:00.827 --- 10.0.0.3 ping statistics --- 00:14:00.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.827 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:00.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:00.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:14:00.827 00:14:00.827 --- 10.0.0.4 ping statistics --- 00:14:00.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.827 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:00.827 00:14:00.827 --- 10.0.0.1 ping statistics --- 00:14:00.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.827 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:00.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:00.827 00:14:00.827 --- 10.0.0.2 ping statistics --- 00:14:00.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.827 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.827 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74172 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74172 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74172 ']' 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.827 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
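For reference, the network topology that nvmf_veth_init builds in the trace above (two initiator veth pairs, two target veth pairs inside the nvmf_tgt_ns_spdk namespace, all bridged by nvmf_br) can be reproduced by hand with roughly the commands below. This is a condensed sketch of the ip/iptables calls already shown in the log, not an addition to the test itself; names and addresses are the ones the test uses, and it must run as root.

    # Sketch of the veth/namespace topology traced by nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side and target-side veth pairs (the *_br ends attach to the bridge).
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target interfaces live inside the namespace where nvmf_tgt runs.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators 10.0.0.1/.2, targets 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the four peer ends together.
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br

    # Allow NVMe/TCP (port 4420) in, allow bridge forwarding, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4

The ping statistics above (all four addresses reachable with 0% loss) confirm this topology before nvmf_tgt is started inside the namespace.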
00:14:00.828 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.828 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:01.087 [2024-11-17 13:23:50.066264] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:14:01.087 [2024-11-17 13:23:50.066319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.087 [2024-11-17 13:23:50.202855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.087 [2024-11-17 13:23:50.252557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.087 [2024-11-17 13:23:50.252624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.087 [2024-11-17 13:23:50.252634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.087 [2024-11-17 13:23:50.252648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.087 [2024-11-17 13:23:50.252655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.087 [2024-11-17 13:23:50.253907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.087 [2024-11-17 13:23:50.254006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.087 [2024-11-17 13:23:50.254178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.087 [2024-11-17 13:23:50.254551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.347 [2024-11-17 13:23:50.327190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:01.347 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:01.915 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:01.915 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:02.174 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:02.174 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:02.433 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:02.433 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:02.433 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:02.434 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:02.434 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:02.693 [2024-11-17 13:23:51.796098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.693 13:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:02.951 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:02.952 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.210 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:03.210 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:03.468 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:03.726 [2024-11-17 13:23:52.789955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:03.726 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:03.985 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:03.985 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:03.985 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:03.985 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:04.921 Initializing NVMe Controllers 00:14:04.921 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:04.921 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:04.921 Initialization complete. Launching workers. 00:14:04.921 ======================================================== 00:14:04.921 Latency(us) 00:14:04.921 Device Information : IOPS MiB/s Average min max 00:14:04.921 PCIE (0000:00:10.0) NSID 1 from core 0: 21984.00 85.88 1455.04 383.16 8778.32 00:14:04.921 ======================================================== 00:14:04.921 Total : 21984.00 85.88 1455.04 383.16 8778.32 00:14:04.921 00:14:04.921 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:06.300 Initializing NVMe Controllers 00:14:06.300 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.300 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:06.300 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:06.300 Initialization complete. Launching workers. 
00:14:06.300 ======================================================== 00:14:06.300 Latency(us) 00:14:06.300 Device Information : IOPS MiB/s Average min max 00:14:06.300 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3313.00 12.94 301.47 103.31 7153.88 00:14:06.300 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8178.39 6971.35 12020.61 00:14:06.300 ======================================================== 00:14:06.300 Total : 3436.00 13.42 583.44 103.31 12020.61 00:14:06.300 00:14:06.300 13:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:07.679 Initializing NVMe Controllers 00:14:07.679 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:07.679 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:07.679 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:07.679 Initialization complete. Launching workers. 00:14:07.679 ======================================================== 00:14:07.680 Latency(us) 00:14:07.680 Device Information : IOPS MiB/s Average min max 00:14:07.680 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9081.56 35.47 3522.42 614.00 9295.95 00:14:07.680 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3740.82 14.61 8554.60 5806.72 15513.17 00:14:07.680 ======================================================== 00:14:07.680 Total : 12822.38 50.09 4990.51 614.00 15513.17 00:14:07.680 00:14:07.939 13:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:07.939 13:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:10.522 Initializing NVMe Controllers 00:14:10.522 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:10.522 Controller IO queue size 128, less than required. 00:14:10.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:10.522 Controller IO queue size 128, less than required. 00:14:10.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:10.522 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:10.522 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:10.522 Initialization complete. Launching workers. 
00:14:10.522 ======================================================== 00:14:10.522 Latency(us) 00:14:10.522 Device Information : IOPS MiB/s Average min max 00:14:10.522 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2036.32 509.08 63684.95 33724.33 111106.01 00:14:10.522 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.94 168.24 195880.94 68122.13 317756.14 00:14:10.522 ======================================================== 00:14:10.522 Total : 2709.26 677.32 96520.49 33724.33 317756.14 00:14:10.522 00:14:10.522 13:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:10.522 Initializing NVMe Controllers 00:14:10.522 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:10.522 Controller IO queue size 128, less than required. 00:14:10.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:10.522 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:10.522 Controller IO queue size 128, less than required. 00:14:10.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:10.522 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:10.522 WARNING: Some requested NVMe devices were skipped 00:14:10.522 No valid NVMe controllers or AIO or URING devices found 00:14:10.522 13:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:13.059 Initializing NVMe Controllers 00:14:13.059 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.059 Controller IO queue size 128, less than required. 00:14:13.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:13.059 Controller IO queue size 128, less than required. 00:14:13.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:13.059 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.059 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:13.059 Initialization complete. Launching workers. 
00:14:13.059 00:14:13.059 ==================== 00:14:13.059 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:13.059 TCP transport: 00:14:13.059 polls: 10195 00:14:13.059 idle_polls: 6930 00:14:13.059 sock_completions: 3265 00:14:13.059 nvme_completions: 5693 00:14:13.059 submitted_requests: 8496 00:14:13.060 queued_requests: 1 00:14:13.060 00:14:13.060 ==================== 00:14:13.060 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:13.060 TCP transport: 00:14:13.060 polls: 13125 00:14:13.060 idle_polls: 9344 00:14:13.060 sock_completions: 3781 00:14:13.060 nvme_completions: 6173 00:14:13.060 submitted_requests: 9284 00:14:13.060 queued_requests: 1 00:14:13.060 ======================================================== 00:14:13.060 Latency(us) 00:14:13.060 Device Information : IOPS MiB/s Average min max 00:14:13.060 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1419.97 354.99 92468.08 46028.83 155717.36 00:14:13.060 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1539.72 384.93 83168.10 38841.00 134221.12 00:14:13.060 ======================================================== 00:14:13.060 Total : 2959.69 739.92 87629.96 38841.00 155717.36 00:14:13.060 00:14:13.060 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:13.319 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.579 rmmod nvme_tcp 00:14:13.579 rmmod nvme_fabrics 00:14:13.579 rmmod nvme_keyring 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74172 ']' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74172 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74172 ']' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74172 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74172 00:14:13.579 killing process with pid 74172 00:14:13.579 13:24:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74172' 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74172 00:14:13.579 13:24:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74172 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:14.148 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:14.407 00:14:14.407 real 0m14.066s 00:14:14.407 user 0m50.633s 00:14:14.407 sys 0m3.992s 00:14:14.407 13:24:03 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:14.407 ************************************ 00:14:14.407 END TEST nvmf_perf 00:14:14.407 ************************************ 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:14.407 ************************************ 00:14:14.407 START TEST nvmf_fio_host 00:14:14.407 ************************************ 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:14.407 * Looking for test storage... 00:14:14.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:14.407 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:14.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.668 --rc genhtml_branch_coverage=1 00:14:14.668 --rc genhtml_function_coverage=1 00:14:14.668 --rc genhtml_legend=1 00:14:14.668 --rc geninfo_all_blocks=1 00:14:14.668 --rc geninfo_unexecuted_blocks=1 00:14:14.668 00:14:14.668 ' 00:14:14.668 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:14.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.668 --rc genhtml_branch_coverage=1 00:14:14.668 --rc genhtml_function_coverage=1 00:14:14.668 --rc genhtml_legend=1 00:14:14.668 --rc geninfo_all_blocks=1 00:14:14.668 --rc geninfo_unexecuted_blocks=1 00:14:14.668 00:14:14.668 ' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.669 --rc genhtml_branch_coverage=1 00:14:14.669 --rc genhtml_function_coverage=1 00:14:14.669 --rc genhtml_legend=1 00:14:14.669 --rc geninfo_all_blocks=1 00:14:14.669 --rc geninfo_unexecuted_blocks=1 00:14:14.669 00:14:14.669 ' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:14.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.669 --rc genhtml_branch_coverage=1 00:14:14.669 --rc genhtml_function_coverage=1 00:14:14.669 --rc genhtml_legend=1 00:14:14.669 --rc geninfo_all_blocks=1 00:14:14.669 --rc geninfo_unexecuted_blocks=1 00:14:14.669 00:14:14.669 ' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.669 13:24:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:14.669 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:14.670 Cannot find device "nvmf_init_br" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:14.670 Cannot find device "nvmf_init_br2" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:14.670 Cannot find device "nvmf_tgt_br" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:14.670 Cannot find device "nvmf_tgt_br2" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:14.670 Cannot find device "nvmf_init_br" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:14.670 Cannot find device "nvmf_init_br2" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:14.670 Cannot find device "nvmf_tgt_br" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:14.670 Cannot find device "nvmf_tgt_br2" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:14.670 Cannot find device "nvmf_br" 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:14.670 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:14.929 Cannot find device "nvmf_init_if" 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:14.929 Cannot find device "nvmf_init_if2" 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.929 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:14.930 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:14.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:14.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:14.930 00:14:14.930 --- 10.0.0.3 ping statistics --- 00:14:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.930 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:14.930 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:14.930 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:14:14.930 00:14:14.930 --- 10.0.0.4 ping statistics --- 00:14:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.930 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:14.930 00:14:14.930 --- 10.0.0.1 ping statistics --- 00:14:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.930 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:14.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:14:14.930 00:14:14.930 --- 10.0.0.2 ping statistics --- 00:14:14.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.930 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74631 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74631 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74631 ']' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.930 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:15.190 [2024-11-17 13:24:04.203106] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:14:15.190 [2024-11-17 13:24:04.203183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.190 [2024-11-17 13:24:04.346798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.190 [2024-11-17 13:24:04.389729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.190 [2024-11-17 13:24:04.389794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.190 [2024-11-17 13:24:04.389820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.190 [2024-11-17 13:24:04.389828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.190 [2024-11-17 13:24:04.389834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
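The nvmf_veth_init sequence traced above builds a small virtual test network before the target comes up. Below is a condensed, hand-written sketch of that topology; device names, addresses and firewall rules are taken from the trace, while the loop and the shortened iptables comment strings are editorial condensations, not the literal nvmf/common.sh code.

  # Target runs in its own namespace; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator side gets 10.0.0.1/.2, target side 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring the links up (the script also raises nvmf_init_if*, nvmf_tgt_if* and lo
  # in the namespace), bridge the peer ends, and open TCP/4420 for NVMe/TCP
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3   # one ping per address pair confirms initiator/target reachability

The SPDK_NVMF comment tag on the rules is what later lets nvmftestfini strip only the rules this test added (iptables-save | grep -v SPDK_NVMF | iptables-restore).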
00:14:15.190 [2024-11-17 13:24:04.390954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.190 [2024-11-17 13:24:04.391088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.190 [2024-11-17 13:24:04.391170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.190 [2024-11-17 13:24:04.391173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.449 [2024-11-17 13:24:04.443254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.449 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.449 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:14:15.449 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:15.708 [2024-11-17 13:24:04.797639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.708 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:15.708 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:15.708 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:15.708 13:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:15.967 Malloc1 00:14:16.226 13:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:16.485 13:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.744 13:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:17.003 [2024-11-17 13:24:06.057882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.003 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
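With the network in place, host/fio.sh stands the target up entirely over the JSON-RPC interface. The commands below are collected verbatim from the trace above (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the target process itself was launched earlier inside the namespace with ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1                      # 64 MB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1  # expose the bdev as namespace 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

Once the listener is added the target logs "NVMe/TCP Target Listening on 10.0.0.3 port 4420", and that address is what the fio runs below connect to.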
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:17.263 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:17.263 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:17.263 fio-3.35 00:14:17.263 Starting 1 thread 00:14:19.800 00:14:19.800 test: (groupid=0, jobs=1): err= 0: pid=74706: Sun Nov 17 13:24:08 2024 00:14:19.800 read: IOPS=9227, BW=36.0MiB/s (37.8MB/s)(72.3MiB/2007msec) 00:14:19.800 slat (nsec): min=1707, max=282337, avg=2253.21, stdev=3017.24 00:14:19.800 clat (usec): min=2243, max=12864, avg=7225.49, stdev=550.82 00:14:19.800 lat (usec): min=2293, max=12866, avg=7227.74, stdev=550.66 00:14:19.800 clat percentiles (usec): 00:14:19.800 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6849], 00:14:19.800 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7308], 00:14:19.800 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:14:19.800 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[10683], 99.95th=[11600], 00:14:19.800 | 99.99th=[12649] 00:14:19.800 bw ( KiB/s): min=36192, max=37472, per=100.00%, avg=36910.00, stdev=624.49, samples=4 00:14:19.800 iops : min= 9048, max= 9368, avg=9227.50, stdev=156.12, samples=4 00:14:19.800 write: IOPS=9231, BW=36.1MiB/s (37.8MB/s)(72.4MiB/2007msec); 0 zone resets 00:14:19.800 slat (nsec): min=1772, max=235622, avg=2311.66, stdev=2329.41 00:14:19.800 clat (usec): min=2111, max=12580, avg=6570.87, stdev=505.83 00:14:19.800 lat (usec): min=2123, max=12582, avg=6573.19, stdev=505.80 00:14:19.800 
clat percentiles (usec): 00:14:19.800 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6194], 00:14:19.800 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:14:19.800 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:14:19.800 | 99.00th=[ 8029], 99.50th=[ 8356], 99.90th=[10552], 99.95th=[11600], 00:14:19.800 | 99.99th=[12518] 00:14:19.800 bw ( KiB/s): min=36416, max=37440, per=100.00%, avg=36930.00, stdev=447.64, samples=4 00:14:19.800 iops : min= 9104, max= 9360, avg=9232.50, stdev=111.91, samples=4 00:14:19.800 lat (msec) : 4=0.12%, 10=99.69%, 20=0.18% 00:14:19.800 cpu : usr=73.08%, sys=20.49%, ctx=14, majf=0, minf=7 00:14:19.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:19.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:19.800 issued rwts: total=18519,18528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:19.800 00:14:19.800 Run status group 0 (all jobs): 00:14:19.800 READ: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.3MiB (75.9MB), run=2007-2007msec 00:14:19.800 WRITE: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2007-2007msec 00:14:19.800 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:19.800 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:19.800 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:19.800 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.800 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:19.800 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
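Both fio jobs drive the target with SPDK's userspace NVMe fio plugin rather than the kernel initiator: fio is launched with the plugin on LD_PRELOAD and a "filename" that encodes the transport address instead of a block device. A sketch reconstructed from the trace (job parameters such as ioengine=spdk and iodepth=128 come from the job files themselves):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The second run, shown below, swaps in app/fio/nvme/mock_sgl_config.fio (16 KiB blocks) against the same filename string; on this VM the 4 KiB job lands around 36 MiB/s in each direction and the 16 KiB SGL job around 140 MiB/s read and 80 MiB/s write.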
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:19.801 13:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:19.801 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:19.801 fio-3.35 00:14:19.801 Starting 1 thread 00:14:22.337 00:14:22.337 test: (groupid=0, jobs=1): err= 0: pid=74755: Sun Nov 17 13:24:11 2024 00:14:22.337 read: IOPS=8976, BW=140MiB/s (147MB/s)(281MiB/2005msec) 00:14:22.337 slat (usec): min=2, max=113, avg= 3.36, stdev= 2.31 00:14:22.337 clat (usec): min=1921, max=16268, avg=8055.90, stdev=2453.46 00:14:22.337 lat (usec): min=1924, max=16271, avg=8059.25, stdev=2453.58 00:14:22.337 clat percentiles (usec): 00:14:22.337 | 1.00th=[ 3785], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5932], 00:14:22.337 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8455], 00:14:22.337 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11469], 95.00th=[12780], 00:14:22.337 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15664], 99.95th=[15664], 00:14:22.337 | 99.99th=[15795] 00:14:22.337 bw ( KiB/s): min=63136, max=77632, per=49.24%, avg=70728.00, stdev=6115.40, samples=4 00:14:22.337 iops : min= 3946, max= 4852, avg=4420.50, stdev=382.21, samples=4 00:14:22.337 write: IOPS=5085, BW=79.5MiB/s (83.3MB/s)(144MiB/1811msec); 0 zone resets 00:14:22.337 slat (usec): min=29, max=411, avg=34.81, stdev=10.40 00:14:22.337 clat (usec): min=3105, max=19730, avg=11201.71, stdev=2240.95 00:14:22.337 lat (usec): min=3135, max=19777, avg=11236.52, stdev=2243.90 00:14:22.337 clat percentiles (usec): 00:14:22.337 | 1.00th=[ 6783], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:14:22.337 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10945], 60.00th=[11469], 00:14:22.337 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14353], 95.00th=[15401], 00:14:22.337 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18744], 99.95th=[19268], 00:14:22.337 | 99.99th=[19792] 00:14:22.337 bw ( KiB/s): min=66176, max=79936, per=89.94%, avg=73184.00, stdev=5813.44, samples=4 00:14:22.337 iops : min= 4136, max= 4996, avg=4574.00, stdev=363.34, samples=4 00:14:22.337 lat (msec) : 2=0.01%, 4=1.06%, 10=63.10%, 20=35.83% 00:14:22.337 cpu : usr=79.94%, sys=15.37%, ctx=10, majf=0, minf=12 00:14:22.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:22.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:22.337 issued rwts: total=17998,9210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:22.337 00:14:22.337 Run status group 0 (all jobs): 00:14:22.337 
READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (295MB), run=2005-2005msec 00:14:22.337 WRITE: bw=79.5MiB/s (83.3MB/s), 79.5MiB/s-79.5MiB/s (83.3MB/s-83.3MB/s), io=144MiB (151MB), run=1811-1811msec 00:14:22.337 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:14:22.596 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.597 rmmod nvme_tcp 00:14:22.597 rmmod nvme_fabrics 00:14:22.597 rmmod nvme_keyring 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74631 ']' 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74631 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74631 ']' 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74631 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74631 00:14:22.597 killing process with pid 74631 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74631' 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74631 00:14:22.597 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74631 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:14:22.856 13:24:11 
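After the I/O passes, the script unwinds the setup in roughly the reverse order it was built. Condensed from the trace (pid 74631 is this run's nvmf_tgt; the literal script goes through nvmftestfini and killprocess helpers rather than these bare commands):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -r nvme-fabrics
  kill 74631 && wait 74631      # stop the nvmf_tgt reactor process
  # nvmf_tcp_fini then strips the SPDK_NVMF-tagged iptables rules and deletes the
  # veth pairs, bridge and namespace, as the following lines show.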
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:22.856 13:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:22.856 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:14:23.115 00:14:23.115 real 0m8.704s 00:14:23.115 user 0m34.839s 00:14:23.115 sys 0m2.336s 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.115 ************************************ 00:14:23.115 END TEST nvmf_fio_host 00:14:23.115 ************************************ 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.115 ************************************ 00:14:23.115 START TEST nvmf_failover 
00:14:23.115 ************************************ 00:14:23.115 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:23.374 * Looking for test storage... 00:14:23.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.374 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:23.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.375 --rc genhtml_branch_coverage=1 00:14:23.375 --rc genhtml_function_coverage=1 00:14:23.375 --rc genhtml_legend=1 00:14:23.375 --rc geninfo_all_blocks=1 00:14:23.375 --rc geninfo_unexecuted_blocks=1 00:14:23.375 00:14:23.375 ' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:23.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.375 --rc genhtml_branch_coverage=1 00:14:23.375 --rc genhtml_function_coverage=1 00:14:23.375 --rc genhtml_legend=1 00:14:23.375 --rc geninfo_all_blocks=1 00:14:23.375 --rc geninfo_unexecuted_blocks=1 00:14:23.375 00:14:23.375 ' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:23.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.375 --rc genhtml_branch_coverage=1 00:14:23.375 --rc genhtml_function_coverage=1 00:14:23.375 --rc genhtml_legend=1 00:14:23.375 --rc geninfo_all_blocks=1 00:14:23.375 --rc geninfo_unexecuted_blocks=1 00:14:23.375 00:14:23.375 ' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:23.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.375 --rc genhtml_branch_coverage=1 00:14:23.375 --rc genhtml_function_coverage=1 00:14:23.375 --rc genhtml_legend=1 00:14:23.375 --rc geninfo_all_blocks=1 00:14:23.375 --rc geninfo_unexecuted_blocks=1 00:14:23.375 00:14:23.375 ' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.375 
13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.375 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
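The nvmftestinit trace that follows builds a veth-based topology because NET_TYPE=virt. A minimal standalone sketch of the same layout, using the device, namespace and address names that appear in the trace (run as root; the ipts wrapper in nvmf/common.sh additionally tags each iptables rule with an SPDK_NVMF comment), would be:

    # create the target network namespace and two veth pairs per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign the 10.0.0.0/24 addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side ends together
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up; ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # allow NVMe/TCP traffic on port 4420 in and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings later in the trace (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) verify this layout before the target is started.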
00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.375 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:23.376 Cannot find device "nvmf_init_br" 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:23.376 Cannot find device "nvmf_init_br2" 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:14:23.376 Cannot find device "nvmf_tgt_br" 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.376 Cannot find device "nvmf_tgt_br2" 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:23.376 Cannot find device "nvmf_init_br" 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:14:23.376 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:23.635 Cannot find device "nvmf_init_br2" 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:23.635 Cannot find device "nvmf_tgt_br" 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:23.635 Cannot find device "nvmf_tgt_br2" 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:23.635 Cannot find device "nvmf_br" 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:23.635 Cannot find device "nvmf_init_if" 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:23.635 Cannot find device "nvmf_init_if2" 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.635 
13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.635 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:23.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:14:23.895 00:14:23.895 --- 10.0.0.3 ping statistics --- 00:14:23.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.895 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:23.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:23.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:23.895 00:14:23.895 --- 10.0.0.4 ping statistics --- 00:14:23.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.895 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:23.895 00:14:23.895 --- 10.0.0.1 ping statistics --- 00:14:23.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.895 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:23.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:23.895 00:14:23.895 --- 10.0.0.2 ping statistics --- 00:14:23.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.895 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75022 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75022 00:14:23.895 13:24:12 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75022 ']' 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.895 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 [2024-11-17 13:24:12.974507] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:14:23.895 [2024-11-17 13:24:12.974605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.154 [2024-11-17 13:24:13.127730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.154 [2024-11-17 13:24:13.179896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.154 [2024-11-17 13:24:13.179976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.154 [2024-11-17 13:24:13.179991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.154 [2024-11-17 13:24:13.180001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.154 [2024-11-17 13:24:13.180011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
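nvmfappstart launches the target inside the namespace with the command line captured above and then blocks until its RPC socket answers. waitforlisten's internals are not part of this excerpt, so the polling loop below is only a sketch of the idea, not the actual helper:

    # start nvmf_tgt in the target namespace (arguments as recorded in the trace)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # wait until the app is serving RPCs on /var/tmp/spdk.sock (sketch only)
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc_py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done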
00:14:24.155 [2024-11-17 13:24:13.181331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.155 [2024-11-17 13:24:13.181461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.155 [2024-11-17 13:24:13.181469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.155 [2024-11-17 13:24:13.246814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.723 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.723 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:24.723 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:24.723 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:24.723 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:24.982 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.982 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:25.242 [2024-11-17 13:24:14.218463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.242 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:25.501 Malloc0 00:14:25.501 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:25.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:25.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:26.018 [2024-11-17 13:24:15.153733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.018 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:26.277 [2024-11-17 13:24:15.365952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:26.277 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:26.536 [2024-11-17 13:24:15.570177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75074 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
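Condensed from the host/failover.sh trace above, the target-side configuration and the bdevperf launch come down to the following RPC sequence; the subsystem is published on all three ports so that listeners can later be removed one at a time:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # transport options exactly as recorded in the trace above
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns "$NQN" Malloc0
    for port in 4420 4421 4422; do
        $rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s "$port"
    done

    # bdevperf runs as its own RPC server so paths can be attached on the fly
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!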
00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75074 /var/tmp/bdevperf.sock 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75074 ']' 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.536 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:27.473 13:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.473 13:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:27.473 13:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:27.732 NVMe0n1 00:14:27.732 13:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:27.991 00:14:27.991 13:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75103 00:14:27.991 13:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:27.991 13:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:28.927 13:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:29.186 [2024-11-17 13:24:18.389730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75acf0 is same with the state(6) to be set 00:14:29.186 [2024-11-17 13:24:18.389803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75acf0 is same with the state(6) to be set 00:14:29.186 [2024-11-17 13:24:18.389813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75acf0 is same with the state(6) to be set 00:14:29.186 [2024-11-17 13:24:18.389821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75acf0 is same with the state(6) to be set 00:14:29.186 [2024-11-17 13:24:18.389829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75acf0 is same with the state(6) to be set 00:14:29.186 13:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:32.474 13:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:32.733 00:14:32.733 
13:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:32.992 13:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:36.281 13:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:36.281 [2024-11-17 13:24:25.288998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:36.281 13:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:37.217 13:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:37.476 [2024-11-17 13:24:26.560136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x896760 is same with the state(6) to be set 00:14:37.476 [2024-11-17 13:24:26.560242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x896760 is same with the state(6) to be set 00:14:37.476 [2024-11-17 13:24:26.560266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x896760 is same with the state(6) to be set 00:14:37.476 13:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75103 00:14:44.048 { 00:14:44.048 "results": [ 00:14:44.048 { 00:14:44.048 "job": "NVMe0n1", 00:14:44.048 "core_mask": "0x1", 00:14:44.048 "workload": "verify", 00:14:44.048 "status": "finished", 00:14:44.048 "verify_range": { 00:14:44.048 "start": 0, 00:14:44.048 "length": 16384 00:14:44.048 }, 00:14:44.048 "queue_depth": 128, 00:14:44.048 "io_size": 4096, 00:14:44.048 "runtime": 15.007607, 00:14:44.048 "iops": 9959.482547750617, 00:14:44.048 "mibps": 38.904228702150846, 00:14:44.048 "io_failed": 3709, 00:14:44.048 "io_timeout": 0, 00:14:44.048 "avg_latency_us": 12514.540769009353, 00:14:44.048 "min_latency_us": 573.44, 00:14:44.048 "max_latency_us": 13643.403636363637 00:14:44.048 } 00:14:44.048 ], 00:14:44.048 "core_count": 1 00:14:44.048 } 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75074 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75074 ']' 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75074 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75074 00:14:44.048 killing process with pid 75074 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75074' 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75074 00:14:44.048 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75074 00:14:44.048 
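On the initiator side the test attaches the same subsystem through two portals with -x failover and then cycles listeners underneath the running I/O. Stripped of the waits on the RPC sockets, the sequence recorded above is roughly:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    brpc="$rpc_py -s /var/tmp/bdevperf.sock"

    # two paths to the same subsystem, failover policy
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
          -f ipv4 -n "$NQN" -x failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
          -f ipv4 -n "$NQN" -x failover

    # kick off the 15 s verify workload in the background
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    sleep 1

    # pull portals out from under the active path while I/O is running
    $rpc_py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
          -f ipv4 -n "$NQN" -x failover
    $rpc_py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    $rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $rpc_py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422

    wait "$run_test_pid"

The JSON summary that follows is bdevperf's result for this run: roughly 9959 IOPS over 15 s with 3709 entries counted as io_failed while the listeners were being cycled, after which the bdevperf process (pid 75074) is killed and try.txt is dumped.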
13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:44.048 [2024-11-17 13:24:15.623444] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:14:44.049 [2024-11-17 13:24:15.623534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75074 ] 00:14:44.049 [2024-11-17 13:24:15.757604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.049 [2024-11-17 13:24:15.804634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.049 [2024-11-17 13:24:15.860189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.049 Running I/O for 15 seconds... 00:14:44.049 10439.00 IOPS, 40.78 MiB/s [2024-11-17T13:24:33.273Z] [2024-11-17 13:24:18.390108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95760 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.049 [2024-11-17 13:24:18.390734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 
[2024-11-17 13:24:18.390923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.390975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.390989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.049 [2024-11-17 13:24:18.391183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.049 [2024-11-17 13:24:18.391197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.391666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.391976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.391990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.392002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.392041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.392069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 
13:24:18.392083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.392095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.050 [2024-11-17 13:24:18.392121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.050 [2024-11-17 13:24:18.392310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.050 [2024-11-17 13:24:18.392323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.392348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.051 [2024-11-17 13:24:18.392892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95928 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.392925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.392959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.392989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 
[2024-11-17 13:24:18.393238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.051 [2024-11-17 13:24:18.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57c060 is same with the state(6) to be set 00:14:44.051 [2024-11-17 13:24:18.393386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.051 [2024-11-17 13:24:18.393397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.051 [2024-11-17 13:24:18.393429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:14:44.051 [2024-11-17 13:24:18.393442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.051 [2024-11-17 13:24:18.393456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.051 [2024-11-17 13:24:18.393466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.051 [2024-11-17 13:24:18.393475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:14:44.051 [2024-11-17 13:24:18.393487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393542] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:14:44.052 [2024-11-17 13:24:18.393848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.393966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.393975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.393984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.393995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.394017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.394026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.394037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:44.052 [2024-11-17 13:24:18.394067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:44.052 [2024-11-17 13:24:18.394078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:14:44.052 [2024-11-17 13:24:18.394090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394146] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 
10.0.0.3:4421 00:14:44.052 [2024-11-17 13:24:18.394201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.052 [2024-11-17 13:24:18.394222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.052 [2024-11-17 13:24:18.394248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.052 [2024-11-17 13:24:18.394282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.052 [2024-11-17 13:24:18.394307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:18.394319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:14:44.052 [2024-11-17 13:24:18.397564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:44.052 [2024-11-17 13:24:18.397600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4df710 (9): Bad file descriptor 00:14:44.052 [2024-11-17 13:24:18.422155] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
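The block above traces one complete TCP failover: the queued I/O on qpair 1 is manually completed with ABORTED - SQ DELETION status, bdev_nvme starts failover from 10.0.0.3:4420 to 10.0.0.3:4421, and the controller reset finishes successfully. What follows is a minimal offline sketch for tallying this kind of console output, not part of the test run itself; the regular expressions only assume the message formats visible in this log, and the function and variable names are illustrative.

import re
import sys
from collections import Counter

# Patterns keyed to the SPDK notices printed above.
ABORT = re.compile(r"print_command: \*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+")
STATUS = re.compile(r"print_completion: \*NOTICE\*: ([A-Z]+ - [A-Z ]+) \(")
FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(console_lines):
    ops, statuses, failovers = Counter(), Counter(), []
    for line in console_lines:
        if (m := ABORT.search(line)):
            ops[m.group(1)] += 1              # READ vs WRITE commands dumped on abort
        if (m := STATUS.search(line)):
            statuses[m.group(1)] += 1         # e.g. "ABORTED - SQ DELETION"
        if (m := FAILOVER.search(line)):
            failovers.append(m.groups())      # (old trid, new trid)
    return ops, statuses, failovers

if __name__ == "__main__":
    ops, statuses, failovers = summarize(sys.stdin)
    print(ops, statuses, failovers, sep="\n")

Fed the lines above, such a script would report the READ/WRITE abort counts per failover event, the "ABORTED - SQ DELETION" status tally, and the (10.0.0.3:4420, 10.0.0.3:4421) transport-ID pair, which is usually enough to confirm the reset path was exercised without reading the dump line by line.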
00:14:44.052 9998.50 IOPS, 39.06 MiB/s [2024-11-17T13:24:33.276Z] 10021.33 IOPS, 39.15 MiB/s [2024-11-17T13:24:33.276Z] 10140.00 IOPS, 39.61 MiB/s [2024-11-17T13:24:33.276Z] [2024-11-17 13:24:22.013310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.052 [2024-11-17 13:24:22.013678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.052 [2024-11-17 13:24:22.013691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.013704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.013730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.013756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.013982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.013994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:44.053 [2024-11-17 13:24:22.014246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.053 [2024-11-17 13:24:22.014526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.053 [2024-11-17 13:24:22.014598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.053 [2024-11-17 13:24:22.014611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.014974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.014987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:44.054 [2024-11-17 13:24:22.015234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015347] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.054 [2024-11-17 13:24:22.015597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.054 [2024-11-17 13:24:22.015609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.054-00:14:44.056 [2024-11-17 13:24:22.015628-22.016911] nvme_qpair.c: 243/474: *NOTICE*: repeated nvme_io_qpair_print_command/spdk_nvme_print_completion pairs for in-flight WRITE (lba 3792-3968) and READ (lba 3184-3360) commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.056 [2024-11-17 13:24:22.016961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:14:44.056 [2024-11-17 13:24:22.016976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:14:44.056 [2024-11-17 13:24:22.016986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3368 len:8 PRP1 0x0 PRP2 0x0
00:14:44.056 [2024-11-17 13:24:22.016999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.056 [2024-11-17 13:24:22.017057] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:14:44.056 [2024-11-17 13:24:22.017110-22.017241] nvme_qpair.c: 223/474: *NOTICE*: four ASYNC EVENT REQUEST (0c) commands on the admin queue (qid:0, cid 3..0) completed as ABORTED - SQ DELETION (00/08)
00:14:44.056 [2024-11-17 13:24:22.017253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:14:44.056 [2024-11-17 13:24:22.020500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:14:44.056 [2024-11-17 13:24:22.020548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4df710 (9): Bad file descriptor
00:14:44.056 [2024-11-17 13:24:22.051950] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:14:44.056 10091.20 IOPS, 39.42 MiB/s [2024-11-17T13:24:33.280Z] 10149.33 IOPS, 39.65 MiB/s [2024-11-17T13:24:33.280Z] 10204.00 IOPS, 39.86 MiB/s [2024-11-17T13:24:33.280Z] 10245.00 IOPS, 40.02 MiB/s [2024-11-17T13:24:33.280Z] 10276.44 IOPS, 40.14 MiB/s
00:14:44.056-00:14:44.059 [2024-11-17 13:24:26.560388-26.564008] nvme_qpair.c: 243/474: *NOTICE*: repeated nvme_io_qpair_print_command/spdk_nvme_print_completion pairs for in-flight READ (lba 240-736) and WRITE (lba 752-1256) commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.059 [2024-11-17 13:24:26.564057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:14:44.059 [2024-11-17 13:24:26.564071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:14:44.059 [2024-11-17 13:24:26.564081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:744 len:8 PRP1 0x0 PRP2 0x0
00:14:44.059 [2024-11-17 13:24:26.564093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:44.059 [2024-11-17 13:24:26.564149] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:14:44.059 [2024-11-17 13:24:26.564246-26.564344] nvme_qpair.c: 223/474: *NOTICE*: four ASYNC EVENT REQUEST (0c) commands on the admin queue (qid:0, cid 0..3) completed as ABORTED - SQ DELETION (00/08)
00:14:44.059 [2024-11-17 13:24:26.564357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:14:44.059 [2024-11-17 13:24:26.564402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4df710 (9): Bad file descriptor
00:14:44.059 [2024-11-17 13:24:26.567631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:14:44.059 [2024-11-17 13:24:26.589885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
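At this point the initiator has failed over from 10.0.0.3:4421 to 4422 and then to 4420, resetting the bdev_nvme controller after each path loss. As a minimal sketch (not part of the test output), the currently connected path could be confirmed over the same RPC socket with bdev_nvme_get_controllers, which this log invokes further below; the grep pattern is only an assumption about the JSON field names emitted by this SPDK version.
# Sketch: list bdev_nvme controllers and show the transport address / service id of the
# path that is currently connected (field names assumed; verify against your SPDK build).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
    | grep -E '"(traddr|trsvcid)"'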
00:14:44.059 10195.10 IOPS, 39.82 MiB/s [2024-11-17T13:24:33.283Z] 10136.91 IOPS, 39.60 MiB/s [2024-11-17T13:24:33.283Z] 10085.25 IOPS, 39.40 MiB/s [2024-11-17T13:24:33.283Z] 10038.08 IOPS, 39.21 MiB/s [2024-11-17T13:24:33.283Z] 9996.50 IOPS, 39.05 MiB/s [2024-11-17T13:24:33.283Z] 9957.07 IOPS, 38.89 MiB/s 00:14:44.059 Latency(us) 00:14:44.059 [2024-11-17T13:24:33.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:44.059 Verification LBA range: start 0x0 length 0x4000 00:14:44.059 NVMe0n1 : 15.01 9959.48 38.90 247.14 0.00 12514.54 573.44 13643.40 00:14:44.059 [2024-11-17T13:24:33.283Z] =================================================================================================================== 00:14:44.059 [2024-11-17T13:24:33.283Z] Total : 9959.48 38.90 247.14 0.00 12514.54 573.44 13643.40 00:14:44.059 Received shutdown signal, test time was about 15.000000 seconds 00:14:44.059 00:14:44.059 Latency(us) 00:14:44.059 [2024-11-17T13:24:33.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.059 [2024-11-17T13:24:33.283Z] =================================================================================================================== 00:14:44.059 [2024-11-17T13:24:33.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.059 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:44.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.059 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75277 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75277 /var/tmp/bdevperf.sock 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75277 ']' 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
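The trace above brings up a second bdevperf instance with -z, which leaves the process idle on its RPC socket (/var/tmp/bdevperf.sock) until a controller is attached and a test run is requested explicitly. A minimal sketch of that flow with the same options, socket path, and target coordinates that appear in this log; driving it by hand outside the failover.sh harness is an assumption:

  # Start bdevperf as an RPC server; -z defers all I/O until perform_tests is requested over the socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # Give it a bdev to exercise: attach the NVMe-oF namespace over TCP on the primary path.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Kick off the configured verify workload and wait for the JSON summary printed on completion.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests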
00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:44.060 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:44.060 [2024-11-17 13:24:33.148204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:44.060 13:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:44.319 [2024-11-17 13:24:33.448593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:44.319 13:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:44.578 NVMe0n1 00:14:44.578 13:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:44.837 00:14:44.837 13:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:45.413 00:14:45.413 13:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:45.413 13:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:45.690 13:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:45.960 13:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:49.249 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:49.249 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:49.249 13:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.249 13:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75346 00:14:49.249 13:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75346 00:14:50.184 { 00:14:50.184 "results": [ 00:14:50.184 { 00:14:50.184 "job": "NVMe0n1", 00:14:50.184 "core_mask": "0x1", 00:14:50.184 "workload": "verify", 00:14:50.184 "status": "finished", 00:14:50.184 "verify_range": { 00:14:50.184 "start": 0, 00:14:50.184 "length": 16384 00:14:50.184 }, 00:14:50.184 "queue_depth": 128, 
00:14:50.184 "io_size": 4096, 00:14:50.184 "runtime": 1.003852, 00:14:50.184 "iops": 7501.105740686874, 00:14:50.184 "mibps": 29.3011942995581, 00:14:50.184 "io_failed": 0, 00:14:50.184 "io_timeout": 0, 00:14:50.184 "avg_latency_us": 16999.55764046843, 00:14:50.184 "min_latency_us": 997.9345454545454, 00:14:50.184 "max_latency_us": 16801.04727272727 00:14:50.184 } 00:14:50.184 ], 00:14:50.184 "core_count": 1 00:14:50.184 } 00:14:50.184 13:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:50.184 [2024-11-17 13:24:32.579619] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:14:50.184 [2024-11-17 13:24:32.579728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75277 ] 00:14:50.184 [2024-11-17 13:24:32.724828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.184 [2024-11-17 13:24:32.769155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.184 [2024-11-17 13:24:32.822228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.184 [2024-11-17 13:24:34.894408] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:50.184 [2024-11-17 13:24:34.894522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.184 [2024-11-17 13:24:34.894547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.184 [2024-11-17 13:24:34.894563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.184 [2024-11-17 13:24:34.894576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.184 [2024-11-17 13:24:34.894588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.184 [2024-11-17 13:24:34.894600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.184 [2024-11-17 13:24:34.894613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.184 [2024-11-17 13:24:34.894625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.184 [2024-11-17 13:24:34.894637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:14:50.184 [2024-11-17 13:24:34.894685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:14:50.184 [2024-11-17 13:24:34.894714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x959710 (9): Bad file descriptor 00:14:50.184 [2024-11-17 13:24:34.897090] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:14:50.184 Running I/O for 1 seconds... 
00:14:50.184 7402.00 IOPS, 28.91 MiB/s 00:14:50.184 Latency(us) 00:14:50.184 [2024-11-17T13:24:39.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.184 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.184 Verification LBA range: start 0x0 length 0x4000 00:14:50.184 NVMe0n1 : 1.00 7501.11 29.30 0.00 0.00 16999.56 997.93 16801.05 00:14:50.184 [2024-11-17T13:24:39.408Z] =================================================================================================================== 00:14:50.184 [2024-11-17T13:24:39.408Z] Total : 7501.11 29.30 0.00 0.00 16999.56 997.93 16801.05 00:14:50.184 13:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:50.184 13:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:50.443 13:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:50.702 13:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:50.702 13:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:50.960 13:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:51.307 13:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75277 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75277 ']' 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75277 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75277 00:14:54.592 killing process with pid 75277 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75277' 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75277 00:14:54.592 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75277 00:14:54.851 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:54.851 13:24:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.110 rmmod nvme_tcp 00:14:55.110 rmmod nvme_fabrics 00:14:55.110 rmmod nvme_keyring 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75022 ']' 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75022 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75022 ']' 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75022 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75022 00:14:55.110 killing process with pid 75022 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75022' 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75022 00:14:55.110 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75022 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:55.369 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:14:55.628 ************************************ 00:14:55.628 END TEST nvmf_failover 00:14:55.628 ************************************ 00:14:55.628 00:14:55.628 real 0m32.421s 00:14:55.628 user 2m4.808s 00:14:55.628 sys 0m5.267s 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:55.628 ************************************ 00:14:55.628 START TEST nvmf_host_discovery 00:14:55.628 ************************************ 00:14:55.628 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:55.888 * Looking for test storage... 
00:14:55.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:55.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.888 --rc genhtml_branch_coverage=1 00:14:55.888 --rc genhtml_function_coverage=1 00:14:55.888 --rc genhtml_legend=1 00:14:55.888 --rc geninfo_all_blocks=1 00:14:55.888 --rc geninfo_unexecuted_blocks=1 00:14:55.888 00:14:55.888 ' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:55.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.888 --rc genhtml_branch_coverage=1 00:14:55.888 --rc genhtml_function_coverage=1 00:14:55.888 --rc genhtml_legend=1 00:14:55.888 --rc geninfo_all_blocks=1 00:14:55.888 --rc geninfo_unexecuted_blocks=1 00:14:55.888 00:14:55.888 ' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:55.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.888 --rc genhtml_branch_coverage=1 00:14:55.888 --rc genhtml_function_coverage=1 00:14:55.888 --rc genhtml_legend=1 00:14:55.888 --rc geninfo_all_blocks=1 00:14:55.888 --rc geninfo_unexecuted_blocks=1 00:14:55.888 00:14:55.888 ' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:55.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.888 --rc genhtml_branch_coverage=1 00:14:55.888 --rc genhtml_function_coverage=1 00:14:55.888 --rc genhtml_legend=1 00:14:55.888 --rc geninfo_all_blocks=1 00:14:55.888 --rc geninfo_unexecuted_blocks=1 00:14:55.888 00:14:55.888 ' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.888 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.889 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:55.889 Cannot find device "nvmf_init_br" 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:55.889 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:55.889 Cannot find device "nvmf_init_br2" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:55.889 Cannot find device "nvmf_tgt_br" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.889 Cannot find device "nvmf_tgt_br2" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:55.889 Cannot find device "nvmf_init_br" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:55.889 Cannot find device "nvmf_init_br2" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:55.889 Cannot find device "nvmf_tgt_br" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:55.889 Cannot find device "nvmf_tgt_br2" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:55.889 Cannot find device "nvmf_br" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:55.889 Cannot find device "nvmf_init_if" 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:14:55.889 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:56.148 Cannot find device "nvmf_init_if2" 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:56.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:56.148 00:14:56.148 --- 10.0.0.3 ping statistics --- 00:14:56.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.148 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:56.148 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:56.148 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:56.148 00:14:56.148 --- 10.0.0.4 ping statistics --- 00:14:56.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.148 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:14:56.148 00:14:56.148 --- 10.0.0.1 ping statistics --- 00:14:56.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.148 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:56.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:56.148 00:14:56.148 --- 10.0.0.2 ping statistics --- 00:14:56.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.148 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75672 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75672 00:14:56.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75672 ']' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.148 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.407 [2024-11-17 13:24:45.424836] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:56.407 [2024-11-17 13:24:45.425067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.407 [2024-11-17 13:24:45.571835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.407 [2024-11-17 13:24:45.621019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.407 [2024-11-17 13:24:45.621287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.407 [2024-11-17 13:24:45.621419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.407 [2024-11-17 13:24:45.621534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.407 [2024-11-17 13:24:45.621571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.407 [2024-11-17 13:24:45.622007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.666 [2024-11-17 13:24:45.693582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.666 [2024-11-17 13:24:45.808917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.666 [2024-11-17 13:24:45.817105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:56.666 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.667 13:24:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.667 null0 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.667 null1 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.667 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75702 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75702 /tmp/host.sock 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75702 ']' 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.667 13:24:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.925 [2024-11-17 13:24:45.907930] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:56.925 [2024-11-17 13:24:45.908191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75702 ] 00:14:56.925 [2024-11-17 13:24:46.059136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.925 [2024-11-17 13:24:46.112186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.184 [2024-11-17 13:24:46.170774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:57.443 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.444 [2024-11-17 13:24:46.597131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:14:57.444 13:24:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.444 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.703 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:14:57.704 13:24:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:14:58.271 [2024-11-17 13:24:47.261951] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:58.271 [2024-11-17 13:24:47.262155] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:58.271 [2024-11-17 13:24:47.262191] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:58.271 [2024-11-17 13:24:47.268009] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:14:58.271 [2024-11-17 13:24:47.322520] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:14:58.271 [2024-11-17 13:24:47.323465] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18ffe50:1 started. 00:14:58.271 [2024-11-17 13:24:47.325207] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:58.271 [2024-11-17 13:24:47.325230] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:58.271 [2024-11-17 13:24:47.330485] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18ffe50 was disconnected and freed. delete nvme_qpair. 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:58.839 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:58.839 13:24:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:58.840 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:58.840 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:59.099 [2024-11-17 13:24:48.073954] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x190df80:1 started. 00:14:59.099 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:59.100 [2024-11-17 13:24:48.080804] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x190df80 was disconnected and freed. delete nvme_qpair. 
00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 [2024-11-17 13:24:48.182183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:59.100 [2024-11-17 13:24:48.182964] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:59.100 [2024-11-17 13:24:48.182988] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:59.100 [2024-11-17 13:24:48.188973] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:59.100 [2024-11-17 13:24:48.247316] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:14:59.100 [2024-11-17 13:24:48.247361] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:59.100 
[2024-11-17 13:24:48.247371] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:59.100 [2024-11-17 13:24:48.247390] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:59.100 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( 
max-- )) 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.360 [2024-11-17 13:24:48.411346] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:59.360 [2024-11-17 13:24:48.411372] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:59.360 [2024-11-17 13:24:48.412659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.360 [2024-11-17 13:24:48.412688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.360 [2024-11-17 13:24:48.412698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.360 [2024-11-17 13:24:48.412706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.360 [2024-11-17 13:24:48.412715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.360 [2024-11-17 13:24:48.412722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.360 [2024-11-17 13:24:48.412730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.360 [2024-11-17 13:24:48.412737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.360 [2024-11-17 13:24:48.412744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc230 is same 
with the state(6) to be set 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.360 [2024-11-17 13:24:48.417371] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:59.360 [2024-11-17 13:24:48.417527] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:59.360 [2024-11-17 13:24:48.417589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc230 (9): Bad file descriptor 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:59.360 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.361 13:24:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:59.361 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.620 13:24:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:59.620 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.621 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.997 [2024-11-17 13:24:49.842389] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:00.997 [2024-11-17 13:24:49.842411] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:00.997 [2024-11-17 13:24:49.842428] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:00.997 [2024-11-17 13:24:49.848421] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:00.997 [2024-11-17 13:24:49.906676] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:00.997 [2024-11-17 13:24:49.907449] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x18d4dd0:1 started. 00:15:00.997 [2024-11-17 13:24:49.909354] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:00.997 [2024-11-17 13:24:49.909387] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:00.997 [2024-11-17 13:24:49.911251] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x18d4dd0 was disconnected and freed. delete nvme_qpair. 
00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.997 request: 00:15:00.997 { 00:15:00.997 "name": "nvme", 00:15:00.997 "trtype": "tcp", 00:15:00.997 "traddr": "10.0.0.3", 00:15:00.997 "adrfam": "ipv4", 00:15:00.997 "trsvcid": "8009", 00:15:00.997 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:00.997 "wait_for_attach": true, 00:15:00.997 "method": "bdev_nvme_start_discovery", 00:15:00.997 "req_id": 1 00:15:00.997 } 00:15:00.997 Got JSON-RPC error response 00:15:00.997 response: 00:15:00.997 { 00:15:00.997 "code": -17, 00:15:00.997 "message": "File exists" 00:15:00.997 } 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.997 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.997 request: 00:15:00.997 { 00:15:00.997 "name": "nvme_second", 00:15:00.997 "trtype": "tcp", 00:15:00.997 "traddr": "10.0.0.3", 00:15:00.997 "adrfam": "ipv4", 00:15:00.997 "trsvcid": "8009", 00:15:00.997 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:00.997 "wait_for_attach": true, 00:15:00.997 "method": "bdev_nvme_start_discovery", 00:15:00.997 "req_id": 1 00:15:00.997 } 00:15:00.997 Got JSON-RPC error response 00:15:00.997 response: 00:15:00.997 { 00:15:00.997 "code": -17, 00:15:00.997 "message": "File exists" 00:15:00.997 } 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:00.997 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.998 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.373 [2024-11-17 13:24:51.185600] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:02.373 [2024-11-17 13:24:51.185645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1900e40 with addr=10.0.0.3, port=8010 00:15:02.373 [2024-11-17 13:24:51.185662] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:02.373 [2024-11-17 
13:24:51.185671] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:02.373 [2024-11-17 13:24:51.185678] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:03.310 [2024-11-17 13:24:52.185592] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:03.310 [2024-11-17 13:24:52.185632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1900e40 with addr=10.0.0.3, port=8010 00:15:03.310 [2024-11-17 13:24:52.185648] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:03.310 [2024-11-17 13:24:52.185655] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:03.310 [2024-11-17 13:24:52.185662] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:04.247 [2024-11-17 13:24:53.185526] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:04.247 request: 00:15:04.247 { 00:15:04.247 "name": "nvme_second", 00:15:04.247 "trtype": "tcp", 00:15:04.247 "traddr": "10.0.0.3", 00:15:04.247 "adrfam": "ipv4", 00:15:04.247 "trsvcid": "8010", 00:15:04.247 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:04.247 "wait_for_attach": false, 00:15:04.247 "attach_timeout_ms": 3000, 00:15:04.247 "method": "bdev_nvme_start_discovery", 00:15:04.247 "req_id": 1 00:15:04.247 } 00:15:04.247 Got JSON-RPC error response 00:15:04.247 response: 00:15:04.247 { 00:15:04.247 "code": -110, 00:15:04.247 "message": "Connection timed out" 00:15:04.247 } 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75702 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:04.247 13:24:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:04.247 rmmod nvme_tcp 00:15:04.247 rmmod nvme_fabrics 00:15:04.247 rmmod nvme_keyring 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75672 ']' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75672 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75672 ']' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75672 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75672 00:15:04.247 killing process with pid 75672 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75672' 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75672 00:15:04.247 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75672 00:15:04.506 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:04.506 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:04.506 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:04.507 13:24:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:04.507 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:04.765 00:15:04.765 real 0m9.111s 00:15:04.765 user 0m17.122s 00:15:04.765 sys 0m2.062s 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.765 ************************************ 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.765 END TEST nvmf_host_discovery 00:15:04.765 ************************************ 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:04.765 13:24:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.766 13:24:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.766 13:24:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:04.766 ************************************ 00:15:04.766 START TEST nvmf_host_multipath_status 00:15:04.766 ************************************ 00:15:04.766 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:05.026 * Looking for test storage... 
00:15:05.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:05.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.026 --rc genhtml_branch_coverage=1 00:15:05.026 --rc genhtml_function_coverage=1 00:15:05.026 --rc genhtml_legend=1 00:15:05.026 --rc geninfo_all_blocks=1 00:15:05.026 --rc geninfo_unexecuted_blocks=1 00:15:05.026 00:15:05.026 ' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:05.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.026 --rc genhtml_branch_coverage=1 00:15:05.026 --rc genhtml_function_coverage=1 00:15:05.026 --rc genhtml_legend=1 00:15:05.026 --rc geninfo_all_blocks=1 00:15:05.026 --rc geninfo_unexecuted_blocks=1 00:15:05.026 00:15:05.026 ' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:05.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.026 --rc genhtml_branch_coverage=1 00:15:05.026 --rc genhtml_function_coverage=1 00:15:05.026 --rc genhtml_legend=1 00:15:05.026 --rc geninfo_all_blocks=1 00:15:05.026 --rc geninfo_unexecuted_blocks=1 00:15:05.026 00:15:05.026 ' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:05.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.026 --rc genhtml_branch_coverage=1 00:15:05.026 --rc genhtml_function_coverage=1 00:15:05.026 --rc genhtml_legend=1 00:15:05.026 --rc geninfo_all_blocks=1 00:15:05.026 --rc geninfo_unexecuted_blocks=1 00:15:05.026 00:15:05.026 ' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.026 13:24:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.026 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:05.027 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:05.027 Cannot find device "nvmf_init_br" 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:05.027 Cannot find device "nvmf_init_br2" 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:05.027 Cannot find device "nvmf_tgt_br" 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.027 Cannot find device "nvmf_tgt_br2" 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:05.027 Cannot find device "nvmf_init_br" 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:05.027 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:05.287 Cannot find device "nvmf_init_br2" 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:05.287 Cannot find device "nvmf_tgt_br" 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:05.287 Cannot find device "nvmf_tgt_br2" 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:05.287 Cannot find device "nvmf_br" 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:05.287 Cannot find device "nvmf_init_if" 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:05.287 Cannot find device "nvmf_init_if2" 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:05.287 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:05.546 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:05.546 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:15:05.546 00:15:05.546 --- 10.0.0.3 ping statistics --- 00:15:05.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.546 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:05.546 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:05.546 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:15:05.546 00:15:05.546 --- 10.0.0.4 ping statistics --- 00:15:05.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.546 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:05.546 00:15:05.546 --- 10.0.0.1 ping statistics --- 00:15:05.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.546 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:05.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:05.546 00:15:05.546 --- 10.0.0.2 ping statistics --- 00:15:05.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.546 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:05.546 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76195 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76195 00:15:05.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76195 ']' 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
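The setup traced above is nvmf_veth_init followed by nvmfappstart: the initiator-side interfaces (nvmf_init_if/nvmf_init_if2, 10.0.0.1-2) stay in the default namespace, the target-side interfaces (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3-4) are moved into the nvmf_tgt_ns_spdk namespace, both sides are joined over the nvmf_br bridge, connectivity is verified with pings, and nvmf_tgt is then launched inside that namespace. A condensed sketch of the equivalent manual steps, reconstructed from the commands in the trace (only one initiator/target pair is shown; this is not the test's own helper code):

    # namespace plus one veth pair per side (names and addresses as traced above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring the links up, then join both bridge-side peers to nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up && ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port and verify reachability before starting the target
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    # start nvmf_tgt inside the namespace; the test then waits for /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &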
00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.547 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:05.547 [2024-11-17 13:24:54.682045] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:15:05.547 [2024-11-17 13:24:54.682134] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.806 [2024-11-17 13:24:54.827799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:05.806 [2024-11-17 13:24:54.872186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.806 [2024-11-17 13:24:54.872428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.806 [2024-11-17 13:24:54.872591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.806 [2024-11-17 13:24:54.872730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.806 [2024-11-17 13:24:54.872789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.806 [2024-11-17 13:24:54.873972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.806 [2024-11-17 13:24:54.873984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.806 [2024-11-17 13:24:54.928767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.806 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.806 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:05.806 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.806 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.806 13:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:06.065 13:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.065 13:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76195 00:15:06.065 13:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:06.324 [2024-11-17 13:24:55.323728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.324 13:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:06.582 Malloc0 00:15:06.582 13:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:06.841 13:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:07.099 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:07.358 [2024-11-17 13:24:56.397121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.358 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:07.617 [2024-11-17 13:24:56.681179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76243 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76243 /var/tmp/bdevperf.sock 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76243 ']' 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
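With the listeners on 10.0.0.3:4420 and 10.0.0.3:4421 in place, the target side of the multipath test is complete, and bdevperf has been launched as the host with its own RPC socket; the Nvme0 multipath controllers are attached through that socket right after this (bdev_nvme_attach_controller ... -x multipath, traced below). Condensed into plain rpc.py calls, the configuration above is roughly the following sketch (arguments copied from the trace, not the script itself):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the options the test passes (-o, -u 8192)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # subsystem with ANA reporting (-r) and a namespace cap of 2 (-m 2), backed by Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same address give the host two paths to the same namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # bdevperf plays the host role and exposes /var/tmp/bdevperf.sock for the checks below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &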
00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.617 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:08.554 13:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.554 13:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:08.555 13:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:08.814 13:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:09.071 Nvme0n1 00:15:09.071 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:09.637 Nvme0n1 00:15:09.637 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:09.638 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:11.541 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:11.541 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:11.800 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:12.060 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:12.996 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:12.996 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:12.996 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:12.996 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:13.255 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:13.255 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:13.255 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:13.255 13:25:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:13.514 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:13.514 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:13.514 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:13.514 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:13.772 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:13.772 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:13.772 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:13.772 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.030 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:14.030 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:14.030 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:14.030 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.287 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:14.287 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:14.287 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:14.287 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:14.545 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:14.545 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:14.545 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:14.804 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
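Every check_status pass in this trace boils down to one query per port: dump bdevperf's io_paths and pick a single field for a given trsvcid. A hedged reconstruction of that per-port check, with the rpc.py call and jq filter copied from the trace (the helper body itself is an assumption about what multipath_status.sh does):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    port_status() {   # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
        local got
        got=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }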
00:15:15.063 13:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:15.999 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:15.999 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:15.999 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:15.999 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:16.258 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:16.258 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:16.258 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:16.258 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.516 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.516 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:16.516 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:16.516 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:16.775 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:16.775 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:16.775 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:16.775 13:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:17.032 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:17.032 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:17.032 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:17.032 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:17.290 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:17.290 13:25:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:17.290 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:17.290 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:17.549 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:17.549 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:17.549 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:17.808 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:18.066 13:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:19.002 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:19.002 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:19.002 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.002 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:19.261 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.261 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:19.261 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.261 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:19.520 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:19.520 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:19.520 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.520 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:19.779 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.779 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:19.779 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:19.779 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.037 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.037 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:20.037 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:20.037 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:20.603 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:20.862 13:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:21.121 13:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:22.056 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:22.056 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:22.056 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.056 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:22.314 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.314 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:15:22.573 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.573 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:22.831 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:22.831 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:22.831 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.831 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:22.831 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.831 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:22.831 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.831 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:23.113 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.114 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:23.114 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.114 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:23.372 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.372 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:23.372 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.372 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:23.631 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:23.631 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:23.631 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:23.890 13:25:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:24.147 13:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:25.083 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:25.083 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:25.083 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:25.083 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.346 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:25.346 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:25.346 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.346 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:25.605 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:25.605 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:25.605 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.605 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:26.171 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.171 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:26.171 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:26.171 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.171 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.171 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:26.430 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.430 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:15:26.430 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:26.430 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:26.430 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.430 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:26.689 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:26.689 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:26.689 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:26.947 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:27.206 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:28.582 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.841 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.841 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:28.841 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.841 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
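Each ANA transition in the trace is a pair of target-side RPCs, one per listener, followed by a one-second settle before the host-side paths are re-queried. A small sketch of that step using the same addresses as this run (the helper name matches the script referenced above; its body is an assumption):

    set_ANA_state() {   # set_ANA_state <state for 4420> <state for 4421>
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }
    set_ANA_state inaccessible optimized   # as in the @112 step above
    sleep 1                                # let the host pick up the new ANA log page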
00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.100 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:29.667 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:29.667 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:29.667 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.667 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:29.667 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.667 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:29.926 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:29.926 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:30.190 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:30.448 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:31.385 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:31.385 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:31.385 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
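The @116 step switches the bdev from the default multipath policy to active_active, so once both listeners are optimized again the @121 check expects current == true on both ports at the same time. A sketch of that sequence, reusing the hypothetical helpers sketched above:

    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state optimized optimized
    sleep 1
    port_status 4420 current true   # with active_active, every optimized path carries I/O
    port_status 4421 current true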
00:15:31.385 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:31.644 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.644 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:31.644 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.644 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:31.903 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.903 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:31.903 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:31.903 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.162 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.162 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:32.421 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.421 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:32.421 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.421 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:32.421 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.421 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:32.680 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.680 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:32.680 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.680 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:32.940 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.940 
13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:32.940 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:33.199 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:33.458 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:34.395 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:34.395 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:34.395 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.395 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:34.654 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:34.654 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:34.654 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:34.654 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.913 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:34.913 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:34.913 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:34.913 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.172 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:35.172 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:35.172 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.172 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:35.431 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:35.431 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:35.431 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.431 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:35.690 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:35.690 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:35.690 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:35.690 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.949 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:35.949 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:35.949 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:36.208 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:36.472 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:37.504 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.762 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.762 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:15:37.762 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.762 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:38.020 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:38.020 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:38.020 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.020 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:38.278 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:38.278 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:38.278 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.278 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:38.536 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:38.536 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:38.536 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.536 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:38.795 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:38.795 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:38.795 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:39.054 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:39.313 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:40.247 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:40.247 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:40.247 13:25:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.248 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:40.506 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.506 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:40.506 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.506 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:40.764 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:40.764 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:40.764 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.764 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:41.023 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.023 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:41.023 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.023 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:41.281 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.281 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:41.281 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.281 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:41.540 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.540 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:41.540 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.540 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76243 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76243 ']' 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76243 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76243 00:15:41.800 killing process with pid 76243 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76243' 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76243 00:15:41.800 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76243 00:15:41.800 { 00:15:41.800 "results": [ 00:15:41.800 { 00:15:41.800 "job": "Nvme0n1", 00:15:41.800 "core_mask": "0x4", 00:15:41.800 "workload": "verify", 00:15:41.800 "status": "terminated", 00:15:41.800 "verify_range": { 00:15:41.800 "start": 0, 00:15:41.800 "length": 16384 00:15:41.800 }, 00:15:41.800 "queue_depth": 128, 00:15:41.800 "io_size": 4096, 00:15:41.800 "runtime": 32.201872, 00:15:41.800 "iops": 9096.179253181306, 00:15:41.800 "mibps": 35.53195020773948, 00:15:41.800 "io_failed": 0, 00:15:41.800 "io_timeout": 0, 00:15:41.800 "avg_latency_us": 14046.829849332134, 00:15:41.800 "min_latency_us": 346.29818181818183, 00:15:41.800 "max_latency_us": 4026531.84 00:15:41.800 } 00:15:41.800 ], 00:15:41.800 "core_count": 1 00:15:41.800 } 00:15:42.063 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76243 00:15:42.063 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:42.063 [2024-11-17 13:24:56.754058] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:15:42.063 [2024-11-17 13:24:56.754164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76243 ] 00:15:42.063 [2024-11-17 13:24:56.895054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.063 [2024-11-17 13:24:56.944426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.063 [2024-11-17 13:24:57.018434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.063 Running I/O for 90 seconds... 
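The terminated-job summary printed above is self-consistent: with a 4096-byte io_size, the reported IOPS and MiB/s figures agree. A quick arithmetic check of those fields (values copied from the JSON; the awk one-liner is only illustration):

    awk 'BEGIN {
        iops = 9096.179253181306; io = 4096; rt = 32.201872
        printf "%.2f MiB/s\n", iops * io / 1048576   # ~35.53, matches the "mibps" field
        printf "%.0f I/Os\n",  iops * rt             # total I/Os completed over the 32.2 s run
    }'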
00:15:42.063 10577.00 IOPS, 41.32 MiB/s [2024-11-17T13:25:31.287Z] 11026.50 IOPS, 43.07 MiB/s [2024-11-17T13:25:31.287Z] 11099.00 IOPS, 43.36 MiB/s [2024-11-17T13:25:31.287Z] 11058.25 IOPS, 43.20 MiB/s [2024-11-17T13:25:31.287Z] 10973.00 IOPS, 42.86 MiB/s [2024-11-17T13:25:31.287Z] 10795.00 IOPS, 42.17 MiB/s [2024-11-17T13:25:31.287Z] 10705.29 IOPS, 41.82 MiB/s [2024-11-17T13:25:31.287Z] 10632.75 IOPS, 41.53 MiB/s [2024-11-17T13:25:31.287Z] 10585.67 IOPS, 41.35 MiB/s [2024-11-17T13:25:31.287Z] 10549.50 IOPS, 41.21 MiB/s [2024-11-17T13:25:31.287Z] 10499.55 IOPS, 41.01 MiB/s [2024-11-17T13:25:31.287Z] 10472.58 IOPS, 40.91 MiB/s [2024-11-17T13:25:31.287Z] 10451.00 IOPS, 40.82 MiB/s [2024-11-17T13:25:31.287Z] 10421.07 IOPS, 40.71 MiB/s [2024-11-17T13:25:31.287Z] [2024-11-17 13:25:12.991146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.991458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.991970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.991983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.063 [2024-11-17 13:25:12.992016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.992053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.992085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.992117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:42.063 [2024-11-17 13:25:12.992148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.992180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.063 [2024-11-17 13:25:12.992212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:42.063 [2024-11-17 13:25:12.992229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.992243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.992302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:85 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.992975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.992993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.993007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.993037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.993069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.993101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.064 [2024-11-17 13:25:12.993132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:42.064 [2024-11-17 13:25:12.993542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.064 [2024-11-17 13:25:12.993555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.993586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.993625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.993657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.993982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.993995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:80 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.065 [2024-11-17 13:25:12.994710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.065 [2024-11-17 13:25:12.994824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:42.065 [2024-11-17 13:25:12.994853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:12.994874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.994893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:12.994906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.994924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:12.994937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.994955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:12.994969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:12.995655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 
dnr:0 00:15:42.066 [2024-11-17 13:25:12.995852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.995943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.995957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:12.996386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:12.996401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:42.066 9926.87 IOPS, 38.78 MiB/s [2024-11-17T13:25:31.290Z] 9306.44 IOPS, 36.35 MiB/s [2024-11-17T13:25:31.290Z] 8759.00 IOPS, 34.21 MiB/s [2024-11-17T13:25:31.290Z] 8272.39 IOPS, 32.31 MiB/s [2024-11-17T13:25:31.290Z] 8214.53 IOPS, 32.09 MiB/s [2024-11-17T13:25:31.290Z] 8311.00 IOPS, 32.46 MiB/s [2024-11-17T13:25:31.290Z] 8405.33 IOPS, 32.83 MiB/s [2024-11-17T13:25:31.290Z] 8515.59 IOPS, 33.26 MiB/s [2024-11-17T13:25:31.290Z] 8607.09 IOPS, 33.62 MiB/s [2024-11-17T13:25:31.290Z] 8685.79 IOPS, 33.93 MiB/s [2024-11-17T13:25:31.290Z] 8744.44 IOPS, 34.16 MiB/s [2024-11-17T13:25:31.290Z] 8795.81 IOPS, 34.36 MiB/s [2024-11-17T13:25:31.290Z] 8845.33 IOPS, 34.55 MiB/s [2024-11-17T13:25:31.290Z] 8909.29 IOPS, 34.80 MiB/s [2024-11-17T13:25:31.290Z] 8972.59 IOPS, 35.05 MiB/s [2024-11-17T13:25:31.290Z] [2024-11-17 13:25:28.323408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:28.323645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:28.323676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:42.066 [2024-11-17 13:25:28.323708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:28.323738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:28.323785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:28.323816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.066 [2024-11-17 13:25:28.323846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.323955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.323968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.326162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.066 [2024-11-17 13:25:28.326195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:42.066 [2024-11-17 13:25:28.326221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:42.067 [2024-11-17 13:25:28.326782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:15:42.067 [2024-11-17 13:25:28.326864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:42.067 [2024-11-17 13:25:28.326895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:42.067 [2024-11-17 13:25:28.326908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:42.067 9024.87 IOPS, 35.25 MiB/s [2024-11-17T13:25:31.291Z] 9062.77 IOPS, 35.40 MiB/s [2024-11-17T13:25:31.291Z] 9092.06 IOPS, 35.52 MiB/s [2024-11-17T13:25:31.291Z] Received shutdown signal, test time was about 32.202512 seconds 00:15:42.067 00:15:42.067 Latency(us) 00:15:42.067 [2024-11-17T13:25:31.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.067 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:42.067 Verification LBA range: start 0x0 length 0x4000 00:15:42.067 Nvme0n1 : 32.20 9096.18 35.53 0.00 0.00 14046.83 346.30 4026531.84 00:15:42.067 [2024-11-17T13:25:31.291Z] =================================================================================================================== 00:15:42.067 [2024-11-17T13:25:31.291Z] Total : 9096.18 35.53 0.00 0.00 14046.83 346.30 4026531.84 00:15:42.067 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:42.326 rmmod nvme_tcp 00:15:42.326 rmmod nvme_fabrics 00:15:42.326 rmmod nvme_keyring 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76195 ']' 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76195 00:15:42.326 13:25:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76195 ']' 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76195 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76195 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76195' 00:15:42.326 killing process with pid 76195 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76195 00:15:42.326 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76195 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:42.585 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- 
# ip link delete nvmf_init_if 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:15:42.844 ************************************ 00:15:42.844 END TEST nvmf_host_multipath_status 00:15:42.844 ************************************ 00:15:42.844 00:15:42.844 real 0m38.030s 00:15:42.844 user 2m1.837s 00:15:42.844 sys 0m11.258s 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.844 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 13:25:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:42.844 13:25:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.844 13:25:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.844 13:25:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 ************************************ 00:15:42.844 START TEST nvmf_discovery_remove_ifc 00:15:42.844 ************************************ 00:15:42.844 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:43.104 * Looking for test storage... 
00:15:43.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.104 --rc genhtml_branch_coverage=1 00:15:43.104 --rc genhtml_function_coverage=1 00:15:43.104 --rc genhtml_legend=1 00:15:43.104 --rc geninfo_all_blocks=1 00:15:43.104 --rc geninfo_unexecuted_blocks=1 00:15:43.104 00:15:43.104 ' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.104 --rc genhtml_branch_coverage=1 00:15:43.104 --rc genhtml_function_coverage=1 00:15:43.104 --rc genhtml_legend=1 00:15:43.104 --rc geninfo_all_blocks=1 00:15:43.104 --rc geninfo_unexecuted_blocks=1 00:15:43.104 00:15:43.104 ' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.104 --rc genhtml_branch_coverage=1 00:15:43.104 --rc genhtml_function_coverage=1 00:15:43.104 --rc genhtml_legend=1 00:15:43.104 --rc geninfo_all_blocks=1 00:15:43.104 --rc geninfo_unexecuted_blocks=1 00:15:43.104 00:15:43.104 ' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:43.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.104 --rc genhtml_branch_coverage=1 00:15:43.104 --rc genhtml_function_coverage=1 00:15:43.104 --rc genhtml_legend=1 00:15:43.104 --rc geninfo_all_blocks=1 00:15:43.104 --rc geninfo_unexecuted_blocks=1 00:15:43.104 00:15:43.104 ' 00:15:43.104 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.105 13:25:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.105 13:25:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:43.105 Cannot find device "nvmf_init_br" 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:43.105 Cannot find device "nvmf_init_br2" 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:43.105 Cannot find device "nvmf_tgt_br" 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.105 Cannot find device "nvmf_tgt_br2" 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:43.105 Cannot find device "nvmf_init_br" 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:43.105 Cannot find device "nvmf_init_br2" 00:15:43.105 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:15:43.106 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:43.106 Cannot find device "nvmf_tgt_br" 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:43.365 Cannot find device "nvmf_tgt_br2" 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:43.365 Cannot find device "nvmf_br" 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:43.365 Cannot find device "nvmf_init_if" 00:15:43.365 13:25:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:43.365 Cannot find device "nvmf_init_if2" 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.365 13:25:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.365 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:43.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:43.624 00:15:43.624 --- 10.0.0.3 ping statistics --- 00:15:43.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.624 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:43.624 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:43.624 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:15:43.624 00:15:43.624 --- 10.0.0.4 ping statistics --- 00:15:43.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.624 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:43.624 00:15:43.624 --- 10.0.0.1 ping statistics --- 00:15:43.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.624 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:43.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:43.624 00:15:43.624 --- 10.0.0.2 ping statistics --- 00:15:43.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.624 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77075 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77075 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77075 ']' 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.624 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.625 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
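The nvmf_veth_init sequence above builds a self-contained topology: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side peers, the 10.0.0.1-10.0.0.4 addresses, and the ping checks, after which nvmfappstart launches the target inside the namespace. A condensed single-pair sketch of those steps (the real script sets up two initiator and two target interfaces):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # host -> target reachability, as checked in the trace

    # nvmfappstart: the target app then runs inside the namespace (backgrounded, then waitforlisten)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &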
00:15:43.625 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.625 13:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:43.625 [2024-11-17 13:25:32.730308] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:15:43.625 [2024-11-17 13:25:32.730402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.883 [2024-11-17 13:25:32.879168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.883 [2024-11-17 13:25:32.938078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.883 [2024-11-17 13:25:32.938140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.883 [2024-11-17 13:25:32.938155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.883 [2024-11-17 13:25:32.938166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.883 [2024-11-17 13:25:32.938175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.883 [2024-11-17 13:25:32.938637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.883 [2024-11-17 13:25:33.002557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.883 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.883 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:43.883 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.883 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.883 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.142 [2024-11-17 13:25:33.128102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.142 [2024-11-17 13:25:33.136247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:44.142 null0 00:15:44.142 [2024-11-17 13:25:33.168137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77099 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77099 /tmp/host.sock 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77099 ']' 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:44.142 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.142 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.142 [2024-11-17 13:25:33.253165] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:15:44.142 [2024-11-17 13:25:33.253270] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77099 ] 00:15:44.401 [2024-11-17 13:25:33.407329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.401 [2024-11-17 13:25:33.459911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:44.401 [2024-11-17 13:25:33.577848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.401 13:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:45.777 [2024-11-17 13:25:34.631315] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:45.777 [2024-11-17 13:25:34.631346] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:45.777 [2024-11-17 13:25:34.631367] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:45.777 [2024-11-17 13:25:34.637371] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:45.778 [2024-11-17 13:25:34.691710] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:45.778 [2024-11-17 13:25:34.692706] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1de6fb0:1 started. 00:15:45.778 [2024-11-17 13:25:34.694366] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:45.778 [2024-11-17 13:25:34.694427] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:45.778 [2024-11-17 13:25:34.694452] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:45.778 [2024-11-17 13:25:34.694467] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:45.778 [2024-11-17 13:25:34.694492] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:45.778 [2024-11-17 13:25:34.699735] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1de6fb0 was disconnected and freed. delete nvme_qpair. 
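The host side above runs a second SPDK app against its own RPC socket and drives it through the suite's rpc_cmd helper (a wrapper around scripts/rpc.py): bdev_nvme debug logging is enabled at launch, options are set before framework init, and discovery is started against the target's 10.0.0.3:8009 listener with short loss/reconnect timeouts so the interface-removal step below converges quickly. A condensed sketch of those calls as they appear in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait_for_bdev nvme0n1: the discovered namespace must show up in the bdev list
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # -> nvme0n1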
00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:45.778 13:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:46.713 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:46.713 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.713 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.713 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:46.713 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:46.713 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:46.714 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:46.714 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.714 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:46.714 13:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.089 13:25:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:48.089 13:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:49.026 13:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:49.964 13:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:50.898 13:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.157 [2024-11-17 13:25:40.122228] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:51.157 [2024-11-17 13:25:40.122295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.157 [2024-11-17 13:25:40.122310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.157 [2024-11-17 13:25:40.122321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.157 [2024-11-17 13:25:40.122330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.157 [2024-11-17 13:25:40.122339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.157 [2024-11-17 13:25:40.122347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.157 [2024-11-17 13:25:40.122356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.157 [2024-11-17 13:25:40.122365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.157 [2024-11-17 13:25:40.122373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.157 [2024-11-17 13:25:40.122381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.157 [2024-11-17 13:25:40.122389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3240 is same with the state(6) to be set 00:15:51.157 [2024-11-17 13:25:40.132227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3240 (9): Bad file descriptor 00:15:51.157 [2024-11-17 13:25:40.142242] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
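What provokes the timeout and reset messages above is the step traced at host/discovery_remove_ifc.sh@75-79: the target's address is deleted and its interface brought down inside the namespace, after which the host polls the bdev list once a second until it empties (the repeated get_bdev_list / sleep 1 cycles). A sketch of that wait loop, mirroring the trace:

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # wait_for_bdev '': nvme0n1 should disappear once the ctrlr-loss timeout expires
    while [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != '' ]]; do
        sleep 1
    done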
00:15:51.157 [2024-11-17 13:25:40.142263] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:15:51.157 [2024-11-17 13:25:40.142272] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:51.157 [2024-11-17 13:25:40.142278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:51.157 [2024-11-17 13:25:40.142307] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.093 [2024-11-17 13:25:41.197887] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:52.093 [2024-11-17 13:25:41.197991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc3240 with addr=10.0.0.3, port=4420 00:15:52.093 [2024-11-17 13:25:41.198023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3240 is same with the state(6) to be set 00:15:52.093 [2024-11-17 13:25:41.198076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3240 (9): Bad file descriptor 00:15:52.093 [2024-11-17 13:25:41.198929] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:15:52.093 [2024-11-17 13:25:41.199023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:52.093 [2024-11-17 13:25:41.199058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:52.093 [2024-11-17 13:25:41.199088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:52.093 [2024-11-17 13:25:41.199109] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:52.093 [2024-11-17 13:25:41.199124] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:52.093 [2024-11-17 13:25:41.199136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:52.093 [2024-11-17 13:25:41.199157] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:15:52.093 [2024-11-17 13:25:41.199170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:52.093 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.030 [2024-11-17 13:25:42.199221] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:53.030 [2024-11-17 13:25:42.199246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:53.030 [2024-11-17 13:25:42.199263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:53.030 [2024-11-17 13:25:42.199285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:53.030 [2024-11-17 13:25:42.199292] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:15:53.030 [2024-11-17 13:25:42.199300] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:53.030 [2024-11-17 13:25:42.199305] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:53.030 [2024-11-17 13:25:42.199309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:53.030 [2024-11-17 13:25:42.199331] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:15:53.030 [2024-11-17 13:25:42.199357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.030 [2024-11-17 13:25:42.199371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.030 [2024-11-17 13:25:42.199380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.030 [2024-11-17 13:25:42.199388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.030 [2024-11-17 13:25:42.199397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.030 [2024-11-17 13:25:42.199405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.030 [2024-11-17 13:25:42.199413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.030 [2024-11-17 13:25:42.199421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.030 [2024-11-17 13:25:42.199430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.030 [2024-11-17 13:25:42.199437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.030 [2024-11-17 13:25:42.199445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:15:53.030 [2024-11-17 13:25:42.199987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4ea20 (9): Bad file descriptor 00:15:53.030 [2024-11-17 13:25:42.200998] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:53.030 [2024-11-17 13:25:42.201036] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.030 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:53.289 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:54.225 13:25:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:54.225 13:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.162 [2024-11-17 13:25:44.204514] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:55.162 [2024-11-17 13:25:44.204539] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:55.162 [2024-11-17 13:25:44.204560] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:55.162 [2024-11-17 13:25:44.210547] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:15:55.162 [2024-11-17 13:25:44.264831] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:15:55.162 [2024-11-17 13:25:44.265635] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1def290:1 started. 00:15:55.162 [2024-11-17 13:25:44.266895] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:55.162 [2024-11-17 13:25:44.267053] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:55.162 [2024-11-17 13:25:44.267112] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:55.162 [2024-11-17 13:25:44.267214] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:15:55.162 [2024-11-17 13:25:44.267337] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:55.162 [2024-11-17 13:25:44.273259] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1def290 was disconnected and freed. delete nvme_qpair. 
00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77099 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77099 ']' 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77099 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77099 00:15:55.421 killing process with pid 77099 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77099' 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77099 00:15:55.421 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77099 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.680 rmmod nvme_tcp 00:15:55.680 rmmod nvme_fabrics 00:15:55.680 rmmod nvme_keyring 00:15:55.680 13:25:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77075 ']' 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77075 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77075 ']' 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77075 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:55.680 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.681 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77075 00:15:55.681 killing process with pid 77075 00:15:55.681 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:55.681 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:55.681 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77075' 00:15:55.681 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77075 00:15:55.681 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77075 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.939 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:15:56.198 00:15:56.198 real 0m13.284s 00:15:56.198 user 0m22.363s 00:15:56.198 sys 0m2.521s 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.198 ************************************ 00:15:56.198 END TEST nvmf_discovery_remove_ifc 00:15:56.198 ************************************ 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.198 ************************************ 00:15:56.198 START TEST nvmf_identify_kernel_target 00:15:56.198 ************************************ 00:15:56.198 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:56.458 * Looking for test storage... 
00:15:56.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:56.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.458 --rc genhtml_branch_coverage=1 00:15:56.458 --rc genhtml_function_coverage=1 00:15:56.458 --rc genhtml_legend=1 00:15:56.458 --rc geninfo_all_blocks=1 00:15:56.458 --rc geninfo_unexecuted_blocks=1 00:15:56.458 00:15:56.458 ' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:56.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.458 --rc genhtml_branch_coverage=1 00:15:56.458 --rc genhtml_function_coverage=1 00:15:56.458 --rc genhtml_legend=1 00:15:56.458 --rc geninfo_all_blocks=1 00:15:56.458 --rc geninfo_unexecuted_blocks=1 00:15:56.458 00:15:56.458 ' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:56.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.458 --rc genhtml_branch_coverage=1 00:15:56.458 --rc genhtml_function_coverage=1 00:15:56.458 --rc genhtml_legend=1 00:15:56.458 --rc geninfo_all_blocks=1 00:15:56.458 --rc geninfo_unexecuted_blocks=1 00:15:56.458 00:15:56.458 ' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:56.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.458 --rc genhtml_branch_coverage=1 00:15:56.458 --rc genhtml_function_coverage=1 00:15:56.458 --rc genhtml_legend=1 00:15:56.458 --rc geninfo_all_blocks=1 00:15:56.458 --rc geninfo_unexecuted_blocks=1 00:15:56.458 00:15:56.458 ' 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.458 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.459 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:56.459 13:25:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.459 13:25:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.459 Cannot find device "nvmf_init_br" 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.459 Cannot find device "nvmf_init_br2" 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.459 Cannot find device "nvmf_tgt_br" 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.459 Cannot find device "nvmf_tgt_br2" 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.459 Cannot find device "nvmf_init_br" 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:56.459 Cannot find device "nvmf_init_br2" 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:15:56.459 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:56.718 Cannot find device "nvmf_tgt_br" 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:56.718 Cannot find device "nvmf_tgt_br2" 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:56.718 Cannot find device "nvmf_br" 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:56.718 Cannot find device "nvmf_init_if" 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:56.718 Cannot find device "nvmf_init_if2" 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:15:56.718 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.718 13:25:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:56.719 13:25:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.719 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:15:56.978 00:15:56.978 --- 10.0.0.3 ping statistics --- 00:15:56.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.978 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:15:56.978 00:15:56.978 --- 10.0.0.4 ping statistics --- 00:15:56.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.978 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:56.978 00:15:56.978 --- 10.0.0.1 ping statistics --- 00:15:56.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.978 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:15:56.978 00:15:56.978 --- 10.0.0.2 ping statistics --- 00:15:56.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.978 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:15:56.978 13:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:56.978 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:56.978 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:57.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:57.238 Waiting for block devices as requested 00:15:57.238 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:57.497 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:57.497 No valid GPT data, bailing 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:57.497 13:25:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:57.497 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:57.756 No valid GPT data, bailing 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:57.756 No valid GPT data, bailing 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:57.756 No valid GPT data, bailing 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:57.756 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -a 10.0.0.1 -t tcp -s 4420 00:15:58.016 00:15:58.016 Discovery Log Number of Records 2, Generation counter 2 00:15:58.016 =====Discovery Log Entry 0====== 00:15:58.016 trtype: tcp 00:15:58.016 adrfam: ipv4 00:15:58.016 subtype: current discovery subsystem 00:15:58.016 treq: not specified, sq flow control disable supported 00:15:58.016 portid: 1 00:15:58.016 trsvcid: 4420 00:15:58.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:58.016 traddr: 10.0.0.1 00:15:58.016 eflags: none 00:15:58.016 sectype: none 00:15:58.016 =====Discovery Log Entry 1====== 00:15:58.016 trtype: tcp 00:15:58.016 adrfam: ipv4 00:15:58.016 subtype: nvme subsystem 00:15:58.016 treq: not 
specified, sq flow control disable supported 00:15:58.016 portid: 1 00:15:58.016 trsvcid: 4420 00:15:58.016 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:58.016 traddr: 10.0.0.1 00:15:58.016 eflags: none 00:15:58.016 sectype: none 00:15:58.016 13:25:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:58.016 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:58.016 ===================================================== 00:15:58.016 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:58.016 ===================================================== 00:15:58.016 Controller Capabilities/Features 00:15:58.016 ================================ 00:15:58.016 Vendor ID: 0000 00:15:58.016 Subsystem Vendor ID: 0000 00:15:58.016 Serial Number: 5e10261d2f987b2c9c8e 00:15:58.016 Model Number: Linux 00:15:58.016 Firmware Version: 6.8.9-20 00:15:58.016 Recommended Arb Burst: 0 00:15:58.016 IEEE OUI Identifier: 00 00 00 00:15:58.016 Multi-path I/O 00:15:58.016 May have multiple subsystem ports: No 00:15:58.016 May have multiple controllers: No 00:15:58.016 Associated with SR-IOV VF: No 00:15:58.016 Max Data Transfer Size: Unlimited 00:15:58.016 Max Number of Namespaces: 0 00:15:58.016 Max Number of I/O Queues: 1024 00:15:58.016 NVMe Specification Version (VS): 1.3 00:15:58.016 NVMe Specification Version (Identify): 1.3 00:15:58.016 Maximum Queue Entries: 1024 00:15:58.016 Contiguous Queues Required: No 00:15:58.016 Arbitration Mechanisms Supported 00:15:58.016 Weighted Round Robin: Not Supported 00:15:58.016 Vendor Specific: Not Supported 00:15:58.016 Reset Timeout: 7500 ms 00:15:58.016 Doorbell Stride: 4 bytes 00:15:58.016 NVM Subsystem Reset: Not Supported 00:15:58.016 Command Sets Supported 00:15:58.016 NVM Command Set: Supported 00:15:58.016 Boot Partition: Not Supported 00:15:58.016 Memory Page Size Minimum: 4096 bytes 00:15:58.016 Memory Page Size Maximum: 4096 bytes 00:15:58.016 Persistent Memory Region: Not Supported 00:15:58.016 Optional Asynchronous Events Supported 00:15:58.016 Namespace Attribute Notices: Not Supported 00:15:58.016 Firmware Activation Notices: Not Supported 00:15:58.016 ANA Change Notices: Not Supported 00:15:58.016 PLE Aggregate Log Change Notices: Not Supported 00:15:58.016 LBA Status Info Alert Notices: Not Supported 00:15:58.016 EGE Aggregate Log Change Notices: Not Supported 00:15:58.016 Normal NVM Subsystem Shutdown event: Not Supported 00:15:58.016 Zone Descriptor Change Notices: Not Supported 00:15:58.016 Discovery Log Change Notices: Supported 00:15:58.016 Controller Attributes 00:15:58.016 128-bit Host Identifier: Not Supported 00:15:58.016 Non-Operational Permissive Mode: Not Supported 00:15:58.016 NVM Sets: Not Supported 00:15:58.016 Read Recovery Levels: Not Supported 00:15:58.016 Endurance Groups: Not Supported 00:15:58.016 Predictable Latency Mode: Not Supported 00:15:58.016 Traffic Based Keep ALive: Not Supported 00:15:58.016 Namespace Granularity: Not Supported 00:15:58.016 SQ Associations: Not Supported 00:15:58.016 UUID List: Not Supported 00:15:58.016 Multi-Domain Subsystem: Not Supported 00:15:58.016 Fixed Capacity Management: Not Supported 00:15:58.016 Variable Capacity Management: Not Supported 00:15:58.016 Delete Endurance Group: Not Supported 00:15:58.016 Delete NVM Set: Not Supported 00:15:58.016 Extended LBA Formats Supported: Not Supported 00:15:58.016 Flexible Data 
Placement Supported: Not Supported 00:15:58.016 00:15:58.016 Controller Memory Buffer Support 00:15:58.016 ================================ 00:15:58.016 Supported: No 00:15:58.016 00:15:58.016 Persistent Memory Region Support 00:15:58.016 ================================ 00:15:58.016 Supported: No 00:15:58.016 00:15:58.016 Admin Command Set Attributes 00:15:58.016 ============================ 00:15:58.016 Security Send/Receive: Not Supported 00:15:58.016 Format NVM: Not Supported 00:15:58.016 Firmware Activate/Download: Not Supported 00:15:58.016 Namespace Management: Not Supported 00:15:58.016 Device Self-Test: Not Supported 00:15:58.016 Directives: Not Supported 00:15:58.016 NVMe-MI: Not Supported 00:15:58.016 Virtualization Management: Not Supported 00:15:58.016 Doorbell Buffer Config: Not Supported 00:15:58.016 Get LBA Status Capability: Not Supported 00:15:58.016 Command & Feature Lockdown Capability: Not Supported 00:15:58.016 Abort Command Limit: 1 00:15:58.016 Async Event Request Limit: 1 00:15:58.016 Number of Firmware Slots: N/A 00:15:58.016 Firmware Slot 1 Read-Only: N/A 00:15:58.016 Firmware Activation Without Reset: N/A 00:15:58.016 Multiple Update Detection Support: N/A 00:15:58.016 Firmware Update Granularity: No Information Provided 00:15:58.016 Per-Namespace SMART Log: No 00:15:58.016 Asymmetric Namespace Access Log Page: Not Supported 00:15:58.016 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:58.016 Command Effects Log Page: Not Supported 00:15:58.016 Get Log Page Extended Data: Supported 00:15:58.016 Telemetry Log Pages: Not Supported 00:15:58.016 Persistent Event Log Pages: Not Supported 00:15:58.016 Supported Log Pages Log Page: May Support 00:15:58.016 Commands Supported & Effects Log Page: Not Supported 00:15:58.016 Feature Identifiers & Effects Log Page:May Support 00:15:58.016 NVMe-MI Commands & Effects Log Page: May Support 00:15:58.016 Data Area 4 for Telemetry Log: Not Supported 00:15:58.016 Error Log Page Entries Supported: 1 00:15:58.016 Keep Alive: Not Supported 00:15:58.016 00:15:58.016 NVM Command Set Attributes 00:15:58.016 ========================== 00:15:58.016 Submission Queue Entry Size 00:15:58.016 Max: 1 00:15:58.016 Min: 1 00:15:58.016 Completion Queue Entry Size 00:15:58.016 Max: 1 00:15:58.016 Min: 1 00:15:58.016 Number of Namespaces: 0 00:15:58.016 Compare Command: Not Supported 00:15:58.016 Write Uncorrectable Command: Not Supported 00:15:58.016 Dataset Management Command: Not Supported 00:15:58.016 Write Zeroes Command: Not Supported 00:15:58.016 Set Features Save Field: Not Supported 00:15:58.016 Reservations: Not Supported 00:15:58.016 Timestamp: Not Supported 00:15:58.016 Copy: Not Supported 00:15:58.016 Volatile Write Cache: Not Present 00:15:58.016 Atomic Write Unit (Normal): 1 00:15:58.016 Atomic Write Unit (PFail): 1 00:15:58.016 Atomic Compare & Write Unit: 1 00:15:58.016 Fused Compare & Write: Not Supported 00:15:58.016 Scatter-Gather List 00:15:58.016 SGL Command Set: Supported 00:15:58.016 SGL Keyed: Not Supported 00:15:58.016 SGL Bit Bucket Descriptor: Not Supported 00:15:58.016 SGL Metadata Pointer: Not Supported 00:15:58.016 Oversized SGL: Not Supported 00:15:58.016 SGL Metadata Address: Not Supported 00:15:58.016 SGL Offset: Supported 00:15:58.016 Transport SGL Data Block: Not Supported 00:15:58.016 Replay Protected Memory Block: Not Supported 00:15:58.016 00:15:58.016 Firmware Slot Information 00:15:58.016 ========================= 00:15:58.016 Active slot: 0 00:15:58.016 00:15:58.016 00:15:58.016 Error Log 
00:15:58.016 ========= 00:15:58.016 00:15:58.016 Active Namespaces 00:15:58.016 ================= 00:15:58.016 Discovery Log Page 00:15:58.016 ================== 00:15:58.016 Generation Counter: 2 00:15:58.016 Number of Records: 2 00:15:58.016 Record Format: 0 00:15:58.016 00:15:58.016 Discovery Log Entry 0 00:15:58.016 ---------------------- 00:15:58.016 Transport Type: 3 (TCP) 00:15:58.016 Address Family: 1 (IPv4) 00:15:58.016 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:58.017 Entry Flags: 00:15:58.017 Duplicate Returned Information: 0 00:15:58.017 Explicit Persistent Connection Support for Discovery: 0 00:15:58.017 Transport Requirements: 00:15:58.017 Secure Channel: Not Specified 00:15:58.017 Port ID: 1 (0x0001) 00:15:58.017 Controller ID: 65535 (0xffff) 00:15:58.017 Admin Max SQ Size: 32 00:15:58.017 Transport Service Identifier: 4420 00:15:58.017 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:58.017 Transport Address: 10.0.0.1 00:15:58.017 Discovery Log Entry 1 00:15:58.017 ---------------------- 00:15:58.017 Transport Type: 3 (TCP) 00:15:58.017 Address Family: 1 (IPv4) 00:15:58.017 Subsystem Type: 2 (NVM Subsystem) 00:15:58.017 Entry Flags: 00:15:58.017 Duplicate Returned Information: 0 00:15:58.017 Explicit Persistent Connection Support for Discovery: 0 00:15:58.017 Transport Requirements: 00:15:58.017 Secure Channel: Not Specified 00:15:58.017 Port ID: 1 (0x0001) 00:15:58.017 Controller ID: 65535 (0xffff) 00:15:58.017 Admin Max SQ Size: 32 00:15:58.017 Transport Service Identifier: 4420 00:15:58.017 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:58.017 Transport Address: 10.0.0.1 00:15:58.017 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:58.277 get_feature(0x01) failed 00:15:58.277 get_feature(0x02) failed 00:15:58.277 get_feature(0x04) failed 00:15:58.277 ===================================================== 00:15:58.277 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:58.277 ===================================================== 00:15:58.277 Controller Capabilities/Features 00:15:58.277 ================================ 00:15:58.277 Vendor ID: 0000 00:15:58.277 Subsystem Vendor ID: 0000 00:15:58.277 Serial Number: 50d0732efabd30118de6 00:15:58.277 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:58.277 Firmware Version: 6.8.9-20 00:15:58.277 Recommended Arb Burst: 6 00:15:58.277 IEEE OUI Identifier: 00 00 00 00:15:58.277 Multi-path I/O 00:15:58.277 May have multiple subsystem ports: Yes 00:15:58.277 May have multiple controllers: Yes 00:15:58.277 Associated with SR-IOV VF: No 00:15:58.277 Max Data Transfer Size: Unlimited 00:15:58.277 Max Number of Namespaces: 1024 00:15:58.277 Max Number of I/O Queues: 128 00:15:58.277 NVMe Specification Version (VS): 1.3 00:15:58.277 NVMe Specification Version (Identify): 1.3 00:15:58.277 Maximum Queue Entries: 1024 00:15:58.277 Contiguous Queues Required: No 00:15:58.277 Arbitration Mechanisms Supported 00:15:58.277 Weighted Round Robin: Not Supported 00:15:58.277 Vendor Specific: Not Supported 00:15:58.277 Reset Timeout: 7500 ms 00:15:58.277 Doorbell Stride: 4 bytes 00:15:58.277 NVM Subsystem Reset: Not Supported 00:15:58.277 Command Sets Supported 00:15:58.277 NVM Command Set: Supported 00:15:58.277 Boot Partition: Not Supported 00:15:58.277 Memory 
Page Size Minimum: 4096 bytes 00:15:58.277 Memory Page Size Maximum: 4096 bytes 00:15:58.277 Persistent Memory Region: Not Supported 00:15:58.277 Optional Asynchronous Events Supported 00:15:58.277 Namespace Attribute Notices: Supported 00:15:58.277 Firmware Activation Notices: Not Supported 00:15:58.277 ANA Change Notices: Supported 00:15:58.277 PLE Aggregate Log Change Notices: Not Supported 00:15:58.277 LBA Status Info Alert Notices: Not Supported 00:15:58.277 EGE Aggregate Log Change Notices: Not Supported 00:15:58.277 Normal NVM Subsystem Shutdown event: Not Supported 00:15:58.277 Zone Descriptor Change Notices: Not Supported 00:15:58.277 Discovery Log Change Notices: Not Supported 00:15:58.277 Controller Attributes 00:15:58.277 128-bit Host Identifier: Supported 00:15:58.277 Non-Operational Permissive Mode: Not Supported 00:15:58.277 NVM Sets: Not Supported 00:15:58.277 Read Recovery Levels: Not Supported 00:15:58.277 Endurance Groups: Not Supported 00:15:58.277 Predictable Latency Mode: Not Supported 00:15:58.277 Traffic Based Keep ALive: Supported 00:15:58.277 Namespace Granularity: Not Supported 00:15:58.277 SQ Associations: Not Supported 00:15:58.277 UUID List: Not Supported 00:15:58.277 Multi-Domain Subsystem: Not Supported 00:15:58.277 Fixed Capacity Management: Not Supported 00:15:58.277 Variable Capacity Management: Not Supported 00:15:58.277 Delete Endurance Group: Not Supported 00:15:58.277 Delete NVM Set: Not Supported 00:15:58.277 Extended LBA Formats Supported: Not Supported 00:15:58.277 Flexible Data Placement Supported: Not Supported 00:15:58.277 00:15:58.277 Controller Memory Buffer Support 00:15:58.277 ================================ 00:15:58.277 Supported: No 00:15:58.277 00:15:58.277 Persistent Memory Region Support 00:15:58.277 ================================ 00:15:58.277 Supported: No 00:15:58.277 00:15:58.277 Admin Command Set Attributes 00:15:58.277 ============================ 00:15:58.277 Security Send/Receive: Not Supported 00:15:58.277 Format NVM: Not Supported 00:15:58.277 Firmware Activate/Download: Not Supported 00:15:58.277 Namespace Management: Not Supported 00:15:58.277 Device Self-Test: Not Supported 00:15:58.277 Directives: Not Supported 00:15:58.277 NVMe-MI: Not Supported 00:15:58.277 Virtualization Management: Not Supported 00:15:58.277 Doorbell Buffer Config: Not Supported 00:15:58.277 Get LBA Status Capability: Not Supported 00:15:58.277 Command & Feature Lockdown Capability: Not Supported 00:15:58.277 Abort Command Limit: 4 00:15:58.277 Async Event Request Limit: 4 00:15:58.277 Number of Firmware Slots: N/A 00:15:58.277 Firmware Slot 1 Read-Only: N/A 00:15:58.277 Firmware Activation Without Reset: N/A 00:15:58.277 Multiple Update Detection Support: N/A 00:15:58.277 Firmware Update Granularity: No Information Provided 00:15:58.277 Per-Namespace SMART Log: Yes 00:15:58.277 Asymmetric Namespace Access Log Page: Supported 00:15:58.277 ANA Transition Time : 10 sec 00:15:58.277 00:15:58.277 Asymmetric Namespace Access Capabilities 00:15:58.277 ANA Optimized State : Supported 00:15:58.277 ANA Non-Optimized State : Supported 00:15:58.277 ANA Inaccessible State : Supported 00:15:58.277 ANA Persistent Loss State : Supported 00:15:58.277 ANA Change State : Supported 00:15:58.277 ANAGRPID is not changed : No 00:15:58.277 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:58.277 00:15:58.277 ANA Group Identifier Maximum : 128 00:15:58.277 Number of ANA Group Identifiers : 128 00:15:58.277 Max Number of Allowed Namespaces : 1024 00:15:58.277 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:15:58.277 Command Effects Log Page: Supported 00:15:58.277 Get Log Page Extended Data: Supported 00:15:58.277 Telemetry Log Pages: Not Supported 00:15:58.277 Persistent Event Log Pages: Not Supported 00:15:58.277 Supported Log Pages Log Page: May Support 00:15:58.277 Commands Supported & Effects Log Page: Not Supported 00:15:58.277 Feature Identifiers & Effects Log Page:May Support 00:15:58.277 NVMe-MI Commands & Effects Log Page: May Support 00:15:58.277 Data Area 4 for Telemetry Log: Not Supported 00:15:58.277 Error Log Page Entries Supported: 128 00:15:58.277 Keep Alive: Supported 00:15:58.277 Keep Alive Granularity: 1000 ms 00:15:58.277 00:15:58.277 NVM Command Set Attributes 00:15:58.277 ========================== 00:15:58.277 Submission Queue Entry Size 00:15:58.277 Max: 64 00:15:58.277 Min: 64 00:15:58.277 Completion Queue Entry Size 00:15:58.277 Max: 16 00:15:58.277 Min: 16 00:15:58.277 Number of Namespaces: 1024 00:15:58.277 Compare Command: Not Supported 00:15:58.277 Write Uncorrectable Command: Not Supported 00:15:58.277 Dataset Management Command: Supported 00:15:58.277 Write Zeroes Command: Supported 00:15:58.277 Set Features Save Field: Not Supported 00:15:58.277 Reservations: Not Supported 00:15:58.277 Timestamp: Not Supported 00:15:58.277 Copy: Not Supported 00:15:58.277 Volatile Write Cache: Present 00:15:58.278 Atomic Write Unit (Normal): 1 00:15:58.278 Atomic Write Unit (PFail): 1 00:15:58.278 Atomic Compare & Write Unit: 1 00:15:58.278 Fused Compare & Write: Not Supported 00:15:58.278 Scatter-Gather List 00:15:58.278 SGL Command Set: Supported 00:15:58.278 SGL Keyed: Not Supported 00:15:58.278 SGL Bit Bucket Descriptor: Not Supported 00:15:58.278 SGL Metadata Pointer: Not Supported 00:15:58.278 Oversized SGL: Not Supported 00:15:58.278 SGL Metadata Address: Not Supported 00:15:58.278 SGL Offset: Supported 00:15:58.278 Transport SGL Data Block: Not Supported 00:15:58.278 Replay Protected Memory Block: Not Supported 00:15:58.278 00:15:58.278 Firmware Slot Information 00:15:58.278 ========================= 00:15:58.278 Active slot: 0 00:15:58.278 00:15:58.278 Asymmetric Namespace Access 00:15:58.278 =========================== 00:15:58.278 Change Count : 0 00:15:58.278 Number of ANA Group Descriptors : 1 00:15:58.278 ANA Group Descriptor : 0 00:15:58.278 ANA Group ID : 1 00:15:58.278 Number of NSID Values : 1 00:15:58.278 Change Count : 0 00:15:58.278 ANA State : 1 00:15:58.278 Namespace Identifier : 1 00:15:58.278 00:15:58.278 Commands Supported and Effects 00:15:58.278 ============================== 00:15:58.278 Admin Commands 00:15:58.278 -------------- 00:15:58.278 Get Log Page (02h): Supported 00:15:58.278 Identify (06h): Supported 00:15:58.278 Abort (08h): Supported 00:15:58.278 Set Features (09h): Supported 00:15:58.278 Get Features (0Ah): Supported 00:15:58.278 Asynchronous Event Request (0Ch): Supported 00:15:58.278 Keep Alive (18h): Supported 00:15:58.278 I/O Commands 00:15:58.278 ------------ 00:15:58.278 Flush (00h): Supported 00:15:58.278 Write (01h): Supported LBA-Change 00:15:58.278 Read (02h): Supported 00:15:58.278 Write Zeroes (08h): Supported LBA-Change 00:15:58.278 Dataset Management (09h): Supported 00:15:58.278 00:15:58.278 Error Log 00:15:58.278 ========= 00:15:58.278 Entry: 0 00:15:58.278 Error Count: 0x3 00:15:58.278 Submission Queue Id: 0x0 00:15:58.278 Command Id: 0x5 00:15:58.278 Phase Bit: 0 00:15:58.278 Status Code: 0x2 00:15:58.278 Status Code Type: 0x0 00:15:58.278 Do Not Retry: 1 00:15:58.278 Error 
Location: 0x28 00:15:58.278 LBA: 0x0 00:15:58.278 Namespace: 0x0 00:15:58.278 Vendor Log Page: 0x0 00:15:58.278 ----------- 00:15:58.278 Entry: 1 00:15:58.278 Error Count: 0x2 00:15:58.278 Submission Queue Id: 0x0 00:15:58.278 Command Id: 0x5 00:15:58.278 Phase Bit: 0 00:15:58.278 Status Code: 0x2 00:15:58.278 Status Code Type: 0x0 00:15:58.278 Do Not Retry: 1 00:15:58.278 Error Location: 0x28 00:15:58.278 LBA: 0x0 00:15:58.278 Namespace: 0x0 00:15:58.278 Vendor Log Page: 0x0 00:15:58.278 ----------- 00:15:58.278 Entry: 2 00:15:58.278 Error Count: 0x1 00:15:58.278 Submission Queue Id: 0x0 00:15:58.278 Command Id: 0x4 00:15:58.278 Phase Bit: 0 00:15:58.278 Status Code: 0x2 00:15:58.278 Status Code Type: 0x0 00:15:58.278 Do Not Retry: 1 00:15:58.278 Error Location: 0x28 00:15:58.278 LBA: 0x0 00:15:58.278 Namespace: 0x0 00:15:58.278 Vendor Log Page: 0x0 00:15:58.278 00:15:58.278 Number of Queues 00:15:58.278 ================ 00:15:58.278 Number of I/O Submission Queues: 128 00:15:58.278 Number of I/O Completion Queues: 128 00:15:58.278 00:15:58.278 ZNS Specific Controller Data 00:15:58.278 ============================ 00:15:58.278 Zone Append Size Limit: 0 00:15:58.278 00:15:58.278 00:15:58.278 Active Namespaces 00:15:58.278 ================= 00:15:58.278 get_feature(0x05) failed 00:15:58.278 Namespace ID:1 00:15:58.278 Command Set Identifier: NVM (00h) 00:15:58.278 Deallocate: Supported 00:15:58.278 Deallocated/Unwritten Error: Not Supported 00:15:58.278 Deallocated Read Value: Unknown 00:15:58.278 Deallocate in Write Zeroes: Not Supported 00:15:58.278 Deallocated Guard Field: 0xFFFF 00:15:58.278 Flush: Supported 00:15:58.278 Reservation: Not Supported 00:15:58.278 Namespace Sharing Capabilities: Multiple Controllers 00:15:58.278 Size (in LBAs): 1310720 (5GiB) 00:15:58.278 Capacity (in LBAs): 1310720 (5GiB) 00:15:58.278 Utilization (in LBAs): 1310720 (5GiB) 00:15:58.278 UUID: c24e3d84-ca1e-437f-95e4-1d031844d2b6 00:15:58.278 Thin Provisioning: Not Supported 00:15:58.278 Per-NS Atomic Units: Yes 00:15:58.278 Atomic Boundary Size (Normal): 0 00:15:58.278 Atomic Boundary Size (PFail): 0 00:15:58.278 Atomic Boundary Offset: 0 00:15:58.278 NGUID/EUI64 Never Reused: No 00:15:58.278 ANA group ID: 1 00:15:58.278 Namespace Write Protected: No 00:15:58.278 Number of LBA Formats: 1 00:15:58.278 Current LBA Format: LBA Format #00 00:15:58.278 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:58.278 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:58.278 rmmod nvme_tcp 00:15:58.278 rmmod nvme_fabrics 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:15:58.278 13:25:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:58.278 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:58.537 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:58.537 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:58.538 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:58.797 13:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:59.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:59.624 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:59.624 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:59.624 ************************************ 00:15:59.624 END TEST nvmf_identify_kernel_target 00:15:59.624 ************************************ 00:15:59.624 00:15:59.624 real 0m3.359s 00:15:59.624 user 0m1.177s 00:15:59.624 sys 0m1.516s 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.624 ************************************ 00:15:59.624 START TEST nvmf_auth_host 00:15:59.624 ************************************ 00:15:59.624 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:59.883 * Looking for test storage... 
00:15:59.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.883 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:59.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.884 --rc genhtml_branch_coverage=1 00:15:59.884 --rc genhtml_function_coverage=1 00:15:59.884 --rc genhtml_legend=1 00:15:59.884 --rc geninfo_all_blocks=1 00:15:59.884 --rc geninfo_unexecuted_blocks=1 00:15:59.884 00:15:59.884 ' 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:59.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.884 --rc genhtml_branch_coverage=1 00:15:59.884 --rc genhtml_function_coverage=1 00:15:59.884 --rc genhtml_legend=1 00:15:59.884 --rc geninfo_all_blocks=1 00:15:59.884 --rc geninfo_unexecuted_blocks=1 00:15:59.884 00:15:59.884 ' 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:59.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.884 --rc genhtml_branch_coverage=1 00:15:59.884 --rc genhtml_function_coverage=1 00:15:59.884 --rc genhtml_legend=1 00:15:59.884 --rc geninfo_all_blocks=1 00:15:59.884 --rc geninfo_unexecuted_blocks=1 00:15:59.884 00:15:59.884 ' 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:59.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.884 --rc genhtml_branch_coverage=1 00:15:59.884 --rc genhtml_function_coverage=1 00:15:59.884 --rc genhtml_legend=1 00:15:59.884 --rc geninfo_all_blocks=1 00:15:59.884 --rc geninfo_unexecuted_blocks=1 00:15:59.884 00:15:59.884 ' 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.884 13:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:59.884 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.884 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:59.885 Cannot find device "nvmf_init_br" 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:59.885 Cannot find device "nvmf_init_br2" 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:59.885 Cannot find device "nvmf_tgt_br" 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.885 Cannot find device "nvmf_tgt_br2" 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:59.885 Cannot find device "nvmf_init_br" 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:15:59.885 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:59.885 Cannot find device "nvmf_init_br2" 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:00.144 Cannot find device "nvmf_tgt_br" 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:00.144 Cannot find device "nvmf_tgt_br2" 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:00.144 Cannot find device "nvmf_br" 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:00.144 Cannot find device "nvmf_init_if" 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:00.144 Cannot find device "nvmf_init_if2" 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.144 13:25:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:16:00.144 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:00.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:00.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:00.403 00:16:00.403 --- 10.0.0.3 ping statistics --- 00:16:00.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.403 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:00.403 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:00.403 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:00.403 00:16:00.403 --- 10.0.0.4 ping statistics --- 00:16:00.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.403 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:00.403 00:16:00.403 --- 10.0.0.1 ping statistics --- 00:16:00.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.403 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:00.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:00.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:00.403 00:16:00.403 --- 10.0.0.2 ping statistics --- 00:16:00.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.403 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78093 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78093 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78093 ']' 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.403 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:01.024 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b81c35c01e419c91bfda8bba4de9361 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Wob 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b81c35c01e419c91bfda8bba4de9361 0 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b81c35c01e419c91bfda8bba4de9361 0 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b81c35c01e419c91bfda8bba4de9361 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:01.025 13:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Wob 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Wob 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Wob 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.025 13:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d66a4889e8aa8e3f599a1d8e97d0bbca52ab3bda12a4f93a2ae34b97c7f638f 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0uv 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d66a4889e8aa8e3f599a1d8e97d0bbca52ab3bda12a4f93a2ae34b97c7f638f 3 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d66a4889e8aa8e3f599a1d8e97d0bbca52ab3bda12a4f93a2ae34b97c7f638f 3 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d66a4889e8aa8e3f599a1d8e97d0bbca52ab3bda12a4f93a2ae34b97c7f638f 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0uv 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0uv 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0uv 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b743580a19b60690abb2dae5933fb1c793dc21a68ece32c5 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1vU 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b743580a19b60690abb2dae5933fb1c793dc21a68ece32c5 0 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b743580a19b60690abb2dae5933fb1c793dc21a68ece32c5 0 
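The gen_dhchap_key calls above all follow the same pattern: pick a digest id (null=0, sha256=1, sha384=2, sha512=3), read len/2 random bytes as a hex string with xxd, write the formatted secret into a chmod-0600 temp file, and record the path in keys[]/ckeys[]. A hedged sketch of the formatting step, assuming (inferred from the DHHC-1 secrets printed later in this log, not taken from the helper itself) that the secret is base64 of the ASCII key followed by its little-endian CRC-32:

gen_key_sketch() {
    local digest_id=$1 len=$2   # digest_id: 0=null 1=sha256 2=sha384 3=sha512
    local hexkey
    hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$hexkey" "$digest_id" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC packing is an assumption
print(f"DHHC-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PY
}
# e.g. gen_key_sketch 1 32 prints something like DHHC-1:01:...base64...: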
00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b743580a19b60690abb2dae5933fb1c793dc21a68ece32c5 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1vU 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1vU 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1vU 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=326696c7396200ab1e1a1f03ac06ea270105d4fcf1e38375 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sLO 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 326696c7396200ab1e1a1f03ac06ea270105d4fcf1e38375 2 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 326696c7396200ab1e1a1f03ac06ea270105d4fcf1e38375 2 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=326696c7396200ab1e1a1f03ac06ea270105d4fcf1e38375 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sLO 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sLO 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sLO 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.025 13:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6e7fc526eabdd57879650c3a0cbcd295 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TSo 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6e7fc526eabdd57879650c3a0cbcd295 1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6e7fc526eabdd57879650c3a0cbcd295 1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6e7fc526eabdd57879650c3a0cbcd295 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:01.025 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TSo 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TSo 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.TSo 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b74e459a5d9970396044b06754ccb333 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0c1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b74e459a5d9970396044b06754ccb333 1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b74e459a5d9970396044b06754ccb333 1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b74e459a5d9970396044b06754ccb333 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0c1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0c1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0c1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79a7e4e26542e3752fe4a08e0217e81acf4b63148bace65f 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RnC 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79a7e4e26542e3752fe4a08e0217e81acf4b63148bace65f 2 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79a7e4e26542e3752fe4a08e0217e81acf4b63148bace65f 2 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79a7e4e26542e3752fe4a08e0217e81acf4b63148bace65f 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RnC 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RnC 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RnC 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:01.301 13:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e6a1633ea50365bb47668c2fe9c1161 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.S14 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e6a1633ea50365bb47668c2fe9c1161 0 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e6a1633ea50365bb47668c2fe9c1161 0 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e6a1633ea50365bb47668c2fe9c1161 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.S14 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.S14 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.S14 00:16:01.301 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8d2042811bee103a6084f2f7035ceb8f9e03842478c7e7487cde0cf5c25ad7fa 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hry 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8d2042811bee103a6084f2f7035ceb8f9e03842478c7e7487cde0cf5c25ad7fa 3 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8d2042811bee103a6084f2f7035ceb8f9e03842478c7e7487cde0cf5c25ad7fa 3 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8d2042811bee103a6084f2f7035ceb8f9e03842478c7e7487cde0cf5c25ad7fa 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:01.302 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hry 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hry 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hry 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78093 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78093 ']' 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.561 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Wob 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0uv ]] 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0uv 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1vU 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sLO ]] 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.sLO 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.TSo 00:16:01.820 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0c1 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0c1 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RnC 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.S14 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.S14 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hry 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:01.821 13:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:01.821 13:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:02.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.080 Waiting for block devices as requested 00:16:02.338 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:02.906 13:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:02.906 No valid GPT data, bailing 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:02.906 No valid GPT data, bailing 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:02.906 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:02.907 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:03.165 No valid GPT data, bailing 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:03.166 No valid GPT data, bailing 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -a 10.0.0.1 -t tcp -s 4420 00:16:03.166 00:16:03.166 Discovery Log Number of Records 2, Generation counter 2 00:16:03.166 =====Discovery Log Entry 0====== 00:16:03.166 trtype: tcp 00:16:03.166 adrfam: ipv4 00:16:03.166 subtype: current discovery subsystem 00:16:03.166 treq: not specified, sq flow control disable supported 00:16:03.166 portid: 1 00:16:03.166 trsvcid: 4420 00:16:03.166 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:03.166 traddr: 10.0.0.1 00:16:03.166 eflags: none 00:16:03.166 sectype: none 00:16:03.166 =====Discovery Log Entry 1====== 00:16:03.166 trtype: tcp 00:16:03.166 adrfam: ipv4 00:16:03.166 subtype: nvme subsystem 00:16:03.166 treq: not specified, sq flow control disable supported 00:16:03.166 portid: 1 00:16:03.166 trsvcid: 4420 00:16:03.166 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:03.166 traddr: 10.0.0.1 00:16:03.166 eflags: none 00:16:03.166 sectype: none 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:03.166 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.425 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.426 nvme0n1 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.426 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 nvme0n1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 
13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.684 13:25:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 nvme0n1 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:03.943 13:25:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 nvme0n1 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.943 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.943 13:25:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.202 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 nvme0n1 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.203 
13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
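
The nvmet_auth_set_key calls are the target-side half of each round: the echo 'hmac(sha256)', echo ffdhe2048 and echo DHHC-1:... entries at auth.sh@48-51 are writes whose redirections xtrace does not display, programming the kernel nvmet host entry with the digest, DH group and secrets the SPDK host is about to present. A rough sketch of what those writes could look like for the keyid=4 round just above; the configfs paths and attribute names are assumptions, since the redirect targets are not visible in this excerpt:

  # assumption: kernel nvmet target with auth support, host entry already created
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest echoed at auth.sh@48
  echo ffdhe2048      > "$host_dir/dhchap_dhgroup"   # DH group echoed at auth.sh@49
  echo 'DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=:' > "$host_dir/dhchap_key"
  # keyid=4 has no controller secret (ckey is empty), so no bidirectional key is written
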
00:16:04.462 nvme0n1 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:04.462 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:04.721 13:25:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.721 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.722 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 nvme0n1 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.981 13:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.981 13:25:54 
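
The for dhgroup / for keyid entries at auth.sh@101-103 show the sweep that produces all of these near-identical rounds: every DH group is exercised with every key id, and each iteration first programs the target (nvmet_auth_set_key) and then runs the host-side attach cycle (connect_authenticate). Reconstructed shape of that sweep for the sha256 block of this run, with placeholder key values:

  keys=(k0 k1 k2 k3 k4)                        # stand-ins; the real entries are the DHHC-1 secrets above
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)     # the groups covered in this excerpt
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          echo "nvmet_auth_set_key   sha256 $dhgroup $keyid"    # program the kernel target
          echo "connect_authenticate sha256 $dhgroup $keyid"    # SPDK host attach/verify/detach
      done
  done
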
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.981 13:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 nvme0n1 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.981 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
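
The get_main_ns_ip fragments repeated before every attach (nvmf/common.sh@769-783) pick the address to dial by mapping the transport to the name of an environment variable and then dereferencing it, which is why xtrace first shows ip=NVMF_INITIATOR_IP and only afterwards the literal 10.0.0.1. A condensed reconstruction of that logic; the TEST_TRANSPORT variable name is an assumption, as the trace only shows its expanded value tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, not yet the address
      [[ -z ${!ip} ]] && return 1            # ${!ip} dereferences NVMF_INITIATOR_IP
      echo "${!ip}"                          # 10.0.0.1 in this run
  }
  # e.g. with TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 exported: get_main_ns_ip
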
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:04.982 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.242 nvme0n1 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.242 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 nvme0n1 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:05.501 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.502 nvme0n1 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.502 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
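
The recurring common/autotest_common.sh@563 xtrace_disable, @10 set +x and @591 [[ 0 == 0 ]] entries are the tracing on/off plumbing wrapped around every rpc_cmd call, which is why only the RPC name and its result are visible between them. A simplified stand-in for that wrapper pattern, not SPDK's literal implementation; the rootdir variable pointing at the SPDK tree is an assumption:

  rpc_cmd() {
      local prev_opts=$-                       # remember whether xtrace was enabled
      set +x                                   # keep the RPC plumbing out of the log
      local rc=0
      "$rootdir/scripts/rpc.py" "$@" || rc=$?
      [[ $prev_opts == *x* ]] && set -x        # restore tracing only if it was on before
      return $rc
  }
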
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:05.763 13:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.331 13:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.331 nvme0n1 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.331 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.590 13:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.590 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.590 nvme0n1 00:16:06.591 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.591 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.591 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.591 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.591 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
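
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) entries at auth.sh@58 build the optional half of the attach command: when the round has a controller secret the array expands to two extra arguments, and when it is empty (keyid 4 in this sweep) nothing is added, so that round exercises host-only rather than bidirectional authentication. A small self-contained demonstration of the expansion with placeholder values:

  ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=c3 [4]="")   # shapes only; the real values are DHHC-1 secrets
  for keyid in 2 4; do
      args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid adds ${#args[@]} extra argument(s): ${args[*]}"
  done
  # keyid=2 adds 2 extra argument(s): --dhchap-ctrlr-key ckey2
  # keyid=4 adds 0 extra argument(s):
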
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.850 13:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.850 nvme0n1 00:16:06.850 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.850 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.850 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:06.850 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.850 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.109 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.110 nvme0n1 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.110 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:07.369 13:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:07.369 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.370 nvme0n1 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.370 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:07.629 13:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.005 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.006 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.006 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.006 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.006 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.006 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.265 nvme0n1 00:16:09.265 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.265 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.265 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.265 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.265 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.265 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.524 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.783 nvme0n1 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.783 13:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:09.783 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.784 13:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.784 13:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.044 nvme0n1 00:16:10.044 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.044 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.044 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.044 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.044 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.044 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.303 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:10.304 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.304 
13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.562 nvme0n1 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.562 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.563 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.822 nvme0n1 00:16:10.822 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.822 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.822 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.822 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.822 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.822 13:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.822 13:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.822 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.081 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.340 nvme0n1 00:16:11.340 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.340 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.340 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.340 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.340 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.340 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.599 13:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.166 nvme0n1 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.166 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.167 
13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.167 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 nvme0n1 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.734 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.735 13:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.303 nvme0n1 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.303 13:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:13.303 13:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.303 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.872 nvme0n1 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.872 13:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.872 nvme0n1 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.872 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.132 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.133 nvme0n1 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:14.133 
13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.133 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.392 nvme0n1 00:16:14.392 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.392 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.392 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.392 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.393 
13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 nvme0n1 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.393 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 nvme0n1 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.653 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.654 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 nvme0n1 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.913 
13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.913 13:26:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.913 13:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.913 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.913 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 nvme0n1 00:16:14.914 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.914 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.914 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.914 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.914 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.914 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:15.173 13:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.173 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.174 nvme0n1 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.174 13:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.174 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.434 nvme0n1 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:15.434 
13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.434 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
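[editor's condensation] The trace above repeats the same host-side sequence for every digest/dhgroup/keyid combination. As a readability aid only, the following sketch collects that sequence in one place; every RPC name and flag is copied verbatim from the trace, rpc_cmd is the autotest wrapper used throughout this log, and the scalar assignments at the top are illustrative stand-ins for the test's loop variables, not the literal host/auth.sh source.

    # condensed sketch of one connect_authenticate iteration, assuming example values
    digest=sha384; dhgroup=ffdhe2048; keyid=1
    # restrict the host to the digest/dhgroup under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach over TCP with the host key; the ctrlr key is passed only when a ckey exists for this keyid
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # verify the authenticated controller came up, then tear it down for the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The trace resumes below with the ffdhe4096 group.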
00:16:15.694 nvme0n1 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:15.694 13:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.694 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.953 nvme0n1 00:16:15.953 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.953 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.953 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.954 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.954 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.954 13:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.954 13:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.954 13:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.954 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 nvme0n1 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.213 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.214 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.214 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.214 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.214 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.214 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.214 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.472 nvme0n1 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.473 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.732 nvme0n1 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.732 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.991 nvme0n1 00:16:16.992 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.992 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.992 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.992 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.992 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.992 13:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.992 13:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.992 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 nvme0n1 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.251 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.252 13:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.252 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.820 nvme0n1 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.820 13:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.079 nvme0n1 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.079 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.080 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.338 nvme0n1 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.338 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.597 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.598 13:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.598 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.857 nvme0n1 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.857 13:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.424 nvme0n1 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.424 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.425 13:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.991 nvme0n1 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.991 13:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.991 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.992 13:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.992 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.558 nvme0n1 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:20.558 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:20.559 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.559 
13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.126 nvme0n1 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.126 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.387 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.959 nvme0n1 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:21.959 13:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.959 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.960 13:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.960 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 nvme0n1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:21.960 13:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.960 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.220 nvme0n1 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.220 nvme0n1 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.220 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:22.479 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 nvme0n1 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.480 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.739 nvme0n1 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.739 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.740 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.999 nvme0n1 00:16:23.000 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.000 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.000 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.000 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.000 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.000 13:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.000 nvme0n1 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.000 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:23.260 
13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.260 nvme0n1 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.260 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.261 
13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.261 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 nvme0n1 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:23.520 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.521 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.780 nvme0n1 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.780 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.039 nvme0n1 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.039 
13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:24.039 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:24.040 13:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.040 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.298 nvme0n1 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:24.298 13:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.298 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.556 nvme0n1 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.556 13:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:24.556 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.557 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.815 nvme0n1 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:24.815 
13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:24.815 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.816 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
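For reference, each digest/dhgroup/keyid iteration traced above boils down to the same four initiator-side RPCs. The sketch below mirrors the sha512/ffdhe4096/key2 pass; it reuses only commands visible in the trace (rpc_cmd being the test framework's wrapper around SPDK's rpc.py), and it assumes the key2/ckey2 key entries and the target listening at 10.0.0.1:4420 were set up earlier in this run, as they are in this job.

# One initiator-side pass of the sweep: sha512 digest, ffdhe4096 DH group, key id 2.
# Assumes key2/ckey2 already exist and the target holds the matching DHHC-1 secrets.
digest=sha512
dhgroup=ffdhe4096
keyid=2

# Restrict DH-HMAC-CHAP negotiation to the combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach using the host key and the controller (bidirectional) key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller only shows up if authentication succeeded.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Tear down before the next combination.
rpc_cmd bdev_nvme_detach_controller nvme0

Note that key id 4 carries no controller key in this run (its ckey is empty in the trace), so for that pass the --dhchap-ctrlr-key argument is dropped and authentication is unidirectional.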
00:16:25.075 nvme0n1 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:25.075 13:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.075 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.076 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.335 nvme0n1 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.335 13:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:25.335 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.336 13:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.336 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.904 nvme0n1 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.904 13:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 nvme0n1 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.163 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.164 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.423 nvme0n1 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.423 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.682 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.941 nvme0n1 00:16:26.941 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.942 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.942 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.942 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.942 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.942 13:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI4MWMzNWMwMWU0MTljOTFiZmRhOGJiYTRkZTkzNjHSSmML: 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGQ2NmE0ODg5ZThhYThlM2Y1OTlhMWQ4ZTk3ZDBiYmNhNTJhYjNiZGExMmE0ZjkzYTJhZTM0Yjk3YzdmNjM4ZjXH+Ig=: 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.942 13:26:16 
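The trace repeats one pattern per digest/dhgroup combination: the target-side (nvmet) secret is set through nvmet_auth_set_key, the host is restricted to a single digest and FFDHE group via bdev_nvme_set_options, and the controller is attached with the matching host/controller key pair. A minimal standalone sketch of that host-side sequence, assuming SPDK's scripts/rpc.py wrapper is on hand and that the keyring entries key0/ckey0 were registered earlier in auth.sh (both names are assumptions taken from the trace, not a full reproduction of the harness):

  # Restrict the initiator to one DH-HMAC-CHAP digest and one dhgroup (sketch).
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Attach to the listener at 10.0.0.1:4420 offering host key0 and controller key ckey0.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0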
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.942 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.510 nvme0n1 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.510 13:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.510 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.078 nvme0n1 00:16:28.078 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.078 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.078 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.079 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.647 nvme0n1 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzlhN2U0ZTI2NTQyZTM3NTJmZTRhMDhlMDIxN2U4MWFjZjRiNjMxNDhiYWNlNjVmqVLRRQ==: 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWU2YTE2MzNlYTUwMzY1YmI0NzY2OGMyZmU5YzExNjGz65PD: 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.647 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.215 nvme0n1 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQyMDQyODExYmVlMTAzYTYwODRmMmY3MDM1Y2ViOGY5ZTAzODQyNDc4YzdlNzQ4N2NkZTBjZjVjMjVhZDdmYbjRato=: 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.215 13:26:18 
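Every successful attach in the trace is verified the same way before the next iteration: the controller list is read back, the returned name is compared against nvme0, and the controller is detached so the following key/dhgroup combination starts from a clean state. Roughly, under the same assumed rpc.py wrapper:

  # Confirm the authenticated controller actually registered, then tear it down (sketch).
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]] || exit 1
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0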
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.215 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.783 nvme0n1 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.783 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:30.043 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 request: 00:16:30.044 { 00:16:30.044 "name": "nvme0", 00:16:30.044 "trtype": "tcp", 00:16:30.044 "traddr": "10.0.0.1", 00:16:30.044 "adrfam": "ipv4", 00:16:30.044 "trsvcid": "4420", 00:16:30.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:30.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:30.044 "prchk_reftag": false, 00:16:30.044 "prchk_guard": false, 00:16:30.044 "hdgst": false, 00:16:30.044 "ddgst": false, 00:16:30.044 "allow_unrecognized_csi": false, 00:16:30.044 "method": "bdev_nvme_attach_controller", 00:16:30.044 "req_id": 1 00:16:30.044 } 00:16:30.044 Got JSON-RPC error response 00:16:30.044 response: 00:16:30.044 { 00:16:30.044 "code": -5, 00:16:30.044 "message": "Input/output error" 00:16:30.044 } 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 request: 00:16:30.044 { 00:16:30.044 "name": "nvme0", 00:16:30.044 "trtype": "tcp", 00:16:30.044 "traddr": "10.0.0.1", 00:16:30.044 "adrfam": "ipv4", 00:16:30.044 "trsvcid": "4420", 00:16:30.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:30.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:30.044 "prchk_reftag": false, 00:16:30.044 "prchk_guard": false, 00:16:30.044 "hdgst": false, 00:16:30.044 "ddgst": false, 00:16:30.044 "dhchap_key": "key2", 00:16:30.044 "allow_unrecognized_csi": false, 00:16:30.044 "method": "bdev_nvme_attach_controller", 00:16:30.044 "req_id": 1 00:16:30.044 } 00:16:30.044 Got JSON-RPC error response 00:16:30.044 response: 00:16:30.044 { 00:16:30.044 "code": -5, 00:16:30.044 "message": "Input/output error" 00:16:30.044 } 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.044 13:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.315 request: 00:16:30.315 { 00:16:30.315 "name": "nvme0", 00:16:30.315 "trtype": "tcp", 00:16:30.315 "traddr": "10.0.0.1", 00:16:30.315 "adrfam": "ipv4", 00:16:30.315 "trsvcid": "4420", 
00:16:30.315 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:30.315 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:30.315 "prchk_reftag": false, 00:16:30.315 "prchk_guard": false, 00:16:30.315 "hdgst": false, 00:16:30.315 "ddgst": false, 00:16:30.315 "dhchap_key": "key1", 00:16:30.315 "dhchap_ctrlr_key": "ckey2", 00:16:30.315 "allow_unrecognized_csi": false, 00:16:30.315 "method": "bdev_nvme_attach_controller", 00:16:30.315 "req_id": 1 00:16:30.315 } 00:16:30.315 Got JSON-RPC error response 00:16:30.315 response: 00:16:30.315 { 00:16:30.315 "code": -5, 00:16:30.316 "message": "Input/output error" 00:16:30.316 } 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.316 nvme0n1 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.316 request: 00:16:30.316 { 00:16:30.316 "name": "nvme0", 00:16:30.316 "dhchap_key": "key1", 00:16:30.316 "dhchap_ctrlr_key": "ckey2", 00:16:30.316 "method": "bdev_nvme_set_keys", 00:16:30.316 "req_id": 1 00:16:30.316 } 00:16:30.316 Got JSON-RPC error response 00:16:30.316 response: 00:16:30.316 
{ 00:16:30.316 "code": -13, 00:16:30.316 "message": "Permission denied" 00:16:30.316 } 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.316 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.597 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:16:30.597 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc0MzU4MGExOWI2MDY5MGFiYjJkYWU1OTMzZmIxYzc5M2RjMjFhNjhlY2UzMmM1IiCn4g==: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzI2Njk2YzczOTYyMDBhYjFlMWExZjAzYWMwNmVhMjcwMTA1ZDRmY2YxZTM4Mzc15vVVeA==: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.542 nvme0n1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmU3ZmM1MjZlYWJkZDU3ODc5NjUwYzNhMGNiY2QyOTVPAblL: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjc0ZTQ1OWE1ZDk5NzAzOTYwNDRiMDY3NTRjY2IzMzOhTpsp: 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.542 request: 00:16:31.542 { 00:16:31.542 "name": "nvme0", 00:16:31.542 "dhchap_key": "key2", 00:16:31.542 "dhchap_ctrlr_key": "ckey1", 00:16:31.542 "method": "bdev_nvme_set_keys", 00:16:31.542 "req_id": 1 00:16:31.542 } 00:16:31.542 Got JSON-RPC error response 00:16:31.542 response: 00:16:31.542 { 00:16:31.542 "code": -13, 00:16:31.542 "message": "Permission denied" 00:16:31.542 } 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:31.542 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:31.543 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:31.543 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.543 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.543 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:31.543 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.802 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.802 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:16:31.802 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:16:32.739 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.740 rmmod nvme_tcp 00:16:32.740 rmmod nvme_fabrics 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78093 ']' 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78093 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78093 ']' 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78093 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.740 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78093 00:16:32.998 killing process with pid 78093 00:16:32.998 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.998 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.998 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78093' 00:16:32.998 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78093 00:16:32.998 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78093 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:32.999 13:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:32.999 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:33.257 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:33.258 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:34.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.194 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:16:34.194 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:34.194 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Wob /tmp/spdk.key-null.1vU /tmp/spdk.key-sha256.TSo /tmp/spdk.key-sha384.RnC /tmp/spdk.key-sha512.hry /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:34.194 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:34.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.762 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:34.762 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:34.762 00:16:34.762 real 0m34.989s 00:16:34.762 user 0m32.268s 00:16:34.762 sys 0m3.845s 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.762 ************************************ 00:16:34.762 END TEST nvmf_auth_host 00:16:34.762 ************************************ 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.762 ************************************ 00:16:34.762 START TEST nvmf_digest 00:16:34.762 ************************************ 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:34.762 * Looking for test storage... 
00:16:34.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.762 13:26:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:35.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.022 --rc genhtml_branch_coverage=1 00:16:35.022 --rc genhtml_function_coverage=1 00:16:35.022 --rc genhtml_legend=1 00:16:35.022 --rc geninfo_all_blocks=1 00:16:35.022 --rc geninfo_unexecuted_blocks=1 00:16:35.022 00:16:35.022 ' 00:16:35.022 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:35.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.022 --rc genhtml_branch_coverage=1 00:16:35.022 --rc genhtml_function_coverage=1 00:16:35.022 --rc genhtml_legend=1 00:16:35.022 --rc geninfo_all_blocks=1 00:16:35.022 --rc geninfo_unexecuted_blocks=1 00:16:35.022 00:16:35.023 ' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:35.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.023 --rc genhtml_branch_coverage=1 00:16:35.023 --rc genhtml_function_coverage=1 00:16:35.023 --rc genhtml_legend=1 00:16:35.023 --rc geninfo_all_blocks=1 00:16:35.023 --rc geninfo_unexecuted_blocks=1 00:16:35.023 00:16:35.023 ' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:35.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.023 --rc genhtml_branch_coverage=1 00:16:35.023 --rc genhtml_function_coverage=1 00:16:35.023 --rc genhtml_legend=1 00:16:35.023 --rc geninfo_all_blocks=1 00:16:35.023 --rc geninfo_unexecuted_blocks=1 00:16:35.023 00:16:35.023 ' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.023 13:26:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.023 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:35.023 Cannot find device "nvmf_init_br" 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:35.023 Cannot find device "nvmf_init_br2" 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:35.023 Cannot find device "nvmf_tgt_br" 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:16:35.023 Cannot find device "nvmf_tgt_br2" 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:35.023 Cannot find device "nvmf_init_br" 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:35.023 Cannot find device "nvmf_init_br2" 00:16:35.023 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:35.024 Cannot find device "nvmf_tgt_br" 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:35.024 Cannot find device "nvmf_tgt_br2" 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:35.024 Cannot find device "nvmf_br" 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:35.024 Cannot find device "nvmf_init_if" 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:35.024 Cannot find device "nvmf_init_if2" 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.024 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.283 13:26:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.283 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:35.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:35.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:35.284 00:16:35.284 --- 10.0.0.3 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:35.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:35.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:16:35.284 00:16:35.284 --- 10.0.0.4 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:35.284 00:16:35.284 --- 10.0.0.1 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:35.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:35.284 00:16:35.284 --- 10.0.0.2 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.284 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:35.543 ************************************ 00:16:35.543 START TEST nvmf_digest_clean 00:16:35.543 ************************************ 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79711 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79711 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79711 ']' 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.543 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:35.543 [2024-11-17 13:26:24.572992] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:16:35.543 [2024-11-17 13:26:24.573091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.543 [2024-11-17 13:26:24.726594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.802 [2024-11-17 13:26:24.779474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.802 [2024-11-17 13:26:24.779546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.802 [2024-11-17 13:26:24.779561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.802 [2024-11-17 13:26:24.779572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.802 [2024-11-17 13:26:24.779581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
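The target for the digest tests is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, so nothing is configured until its RPC socket answers. Condensed from the commands traced above; the suite's waitforlisten helper is replaced here by a simple polling loop, shown only as an illustrative sketch, not the suite's actual implementation:

  # launch nvmf_tgt in the test namespace, framework paused until RPC init
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # poll the default RPC socket until the application starts accepting commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done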
00:16:35.802 [2024-11-17 13:26:24.780029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.802 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.802 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:35.802 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.802 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.802 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:35.802 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.803 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:35.803 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:35.803 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:35.803 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.803 13:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:35.803 [2024-11-17 13:26:24.931160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:35.803 null0 00:16:35.803 [2024-11-17 13:26:24.987491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.803 [2024-11-17 13:26:25.011624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79737 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79737 /var/tmp/bperf.sock 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79737 ']' 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.803 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:36.061 [2024-11-17 13:26:25.076323] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:16:36.061 [2024-11-17 13:26:25.076448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79737 ] 00:16:36.061 [2024-11-17 13:26:25.226043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.061 [2024-11-17 13:26:25.277627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.320 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.320 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:36.320 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:36.320 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:36.320 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:36.579 [2024-11-17 13:26:25.552739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.579 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:36.579 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:36.837 nvme0n1 00:16:36.837 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:36.837 13:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:37.096 Running I/O for 2 seconds... 
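Each digest-clean round above reduces to four traced commands: start bdevperf paused on its own RPC socket, initialize its framework, attach the namespaced target over TCP with data digest enabled, then drive I/O through bdevperf.py. A condensed recap of the trace (error handling and the later accel-stats check omitted):

  # bdevperf with its own RPC socket, held at --wait-for-rpc
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach the target at 10.0.0.3:4420 with data digest (--ddgst) on the TCP connection
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second randread workload defined on the bdevperf command line
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests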
00:16:38.968 18669.00 IOPS, 72.93 MiB/s [2024-11-17T13:26:28.192Z] 18669.00 IOPS, 72.93 MiB/s 00:16:38.968 Latency(us) 00:16:38.968 [2024-11-17T13:26:28.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.968 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:38.968 nvme0n1 : 2.01 18702.27 73.06 0.00 0.00 6839.55 6404.65 15192.44 00:16:38.968 [2024-11-17T13:26:28.192Z] =================================================================================================================== 00:16:38.968 [2024-11-17T13:26:28.192Z] Total : 18702.27 73.06 0.00 0.00 6839.55 6404.65 15192.44 00:16:38.968 { 00:16:38.968 "results": [ 00:16:38.968 { 00:16:38.968 "job": "nvme0n1", 00:16:38.968 "core_mask": "0x2", 00:16:38.968 "workload": "randread", 00:16:38.968 "status": "finished", 00:16:38.968 "queue_depth": 128, 00:16:38.968 "io_size": 4096, 00:16:38.968 "runtime": 2.010077, 00:16:38.968 "iops": 18702.268619560346, 00:16:38.968 "mibps": 73.0557367951576, 00:16:38.968 "io_failed": 0, 00:16:38.968 "io_timeout": 0, 00:16:38.968 "avg_latency_us": 6839.5512626867185, 00:16:38.968 "min_latency_us": 6404.654545454546, 00:16:38.968 "max_latency_us": 15192.436363636363 00:16:38.968 } 00:16:38.968 ], 00:16:38.968 "core_count": 1 00:16:38.968 } 00:16:38.968 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:38.968 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:38.968 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:38.968 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:38.968 | select(.opcode=="crc32c") 00:16:38.968 | "\(.module_name) \(.executed)"' 00:16:38.968 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:39.226 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:39.226 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:39.226 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:39.226 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:39.227 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79737 00:16:39.227 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79737 ']' 00:16:39.227 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79737 00:16:39.227 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:39.227 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.227 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79737 00:16:39.485 killing process with pid 79737 00:16:39.485 Received shutdown signal, test time was about 2.000000 seconds 00:16:39.485 00:16:39.485 Latency(us) 00:16:39.485 [2024-11-17T13:26:28.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:39.485 [2024-11-17T13:26:28.709Z] =================================================================================================================== 00:16:39.485 [2024-11-17T13:26:28.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79737' 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79737 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79737 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:39.485 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79786 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79786 /var/tmp/bperf.sock 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79786 ']' 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:39.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.486 13:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.486 [2024-11-17 13:26:28.698481] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:16:39.486 [2024-11-17 13:26:28.698830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79786 ] 00:16:39.486 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:39.486 Zero copy mechanism will not be used. 00:16:39.745 [2024-11-17 13:26:28.841648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.745 [2024-11-17 13:26:28.888602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.681 13:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.681 13:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:40.681 13:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:40.681 13:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:40.681 13:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:40.939 [2024-11-17 13:26:29.958589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.939 13:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.939 13:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:41.198 nvme0n1 00:16:41.198 13:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:41.198 13:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:41.457 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:41.457 Zero copy mechanism will not be used. 00:16:41.457 Running I/O for 2 seconds... 
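This run uses 131072-byte I/O at queue depth 16, so the mibps column bdevperf reports is simply the iops figure scaled by the I/O size. As a quick worked check against the JSON that follows (numbers taken from the printed results, nothing else assumed):

    mibps = iops * io_size / 2^20
          = 7951.84 * 131072 / 1048576
          ~ 993.98 MiB/s   (matching the reported "mibps": 993.98...)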
00:16:43.329 7984.00 IOPS, 998.00 MiB/s [2024-11-17T13:26:32.553Z] 7952.00 IOPS, 994.00 MiB/s 00:16:43.329 Latency(us) 00:16:43.329 [2024-11-17T13:26:32.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.329 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:43.329 nvme0n1 : 2.00 7951.84 993.98 0.00 0.00 2009.42 1906.50 3381.06 00:16:43.329 [2024-11-17T13:26:32.553Z] =================================================================================================================== 00:16:43.329 [2024-11-17T13:26:32.553Z] Total : 7951.84 993.98 0.00 0.00 2009.42 1906.50 3381.06 00:16:43.329 { 00:16:43.329 "results": [ 00:16:43.329 { 00:16:43.329 "job": "nvme0n1", 00:16:43.329 "core_mask": "0x2", 00:16:43.329 "workload": "randread", 00:16:43.329 "status": "finished", 00:16:43.329 "queue_depth": 16, 00:16:43.329 "io_size": 131072, 00:16:43.329 "runtime": 2.002052, 00:16:43.329 "iops": 7951.841410712609, 00:16:43.329 "mibps": 993.9801763390761, 00:16:43.329 "io_failed": 0, 00:16:43.329 "io_timeout": 0, 00:16:43.329 "avg_latency_us": 2009.41639104614, 00:16:43.329 "min_latency_us": 1906.5018181818182, 00:16:43.329 "max_latency_us": 3381.061818181818 00:16:43.329 } 00:16:43.329 ], 00:16:43.329 "core_count": 1 00:16:43.329 } 00:16:43.329 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:43.329 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:43.329 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:43.329 | select(.opcode=="crc32c") 00:16:43.329 | "\(.module_name) \(.executed)"' 00:16:43.329 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:43.329 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79786 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79786 ']' 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79786 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.587 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79786 00:16:43.846 killing process with pid 79786 00:16:43.846 Received shutdown signal, test time was about 2.000000 seconds 00:16:43.846 00:16:43.846 Latency(us) 00:16:43.846 [2024-11-17T13:26:33.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:43.846 [2024-11-17T13:26:33.070Z] =================================================================================================================== 00:16:43.846 [2024-11-17T13:26:33.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79786' 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79786 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79786 00:16:43.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79852 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79852 /var/tmp/bperf.sock 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79852 ']' 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.846 13:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:43.846 [2024-11-17 13:26:33.036405] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:16:43.846 [2024-11-17 13:26:33.036652] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79852 ] 00:16:44.104 [2024-11-17 13:26:33.167673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.104 [2024-11-17 13:26:33.207929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.104 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.104 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:44.104 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:44.104 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:44.104 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:44.671 [2024-11-17 13:26:33.598083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:44.671 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.671 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.930 nvme0n1 00:16:44.930 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:44.930 13:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:44.930 Running I/O for 2 seconds... 
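After each 2-second run, host/digest.sh@93-96 verifies that the digest work really went through the accel framework: get_accel_stats queries accel_get_stats over the bperf socket, filters the output down to the crc32c opcode, and the executing module is compared against the expected one (software in all of these runs, since scan_dsa=false). A sketch of that check, using only the RPC and jq filter shown in the trace; the surrounding variable handling is an assumption:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))               # digest.sh@95: crc32c operations actually executed
    [[ $acc_module == "$exp_module" ]]   # digest.sh@96: and in the expected module (software here)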
00:16:47.245 20067.00 IOPS, 78.39 MiB/s [2024-11-17T13:26:36.469Z] 20130.00 IOPS, 78.63 MiB/s 00:16:47.245 Latency(us) 00:16:47.245 [2024-11-17T13:26:36.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.245 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.245 nvme0n1 : 2.01 20138.50 78.67 0.00 0.00 6346.06 3425.75 13285.93 00:16:47.245 [2024-11-17T13:26:36.469Z] =================================================================================================================== 00:16:47.245 [2024-11-17T13:26:36.469Z] Total : 20138.50 78.67 0.00 0.00 6346.06 3425.75 13285.93 00:16:47.245 { 00:16:47.245 "results": [ 00:16:47.245 { 00:16:47.245 "job": "nvme0n1", 00:16:47.245 "core_mask": "0x2", 00:16:47.245 "workload": "randwrite", 00:16:47.245 "status": "finished", 00:16:47.245 "queue_depth": 128, 00:16:47.245 "io_size": 4096, 00:16:47.245 "runtime": 2.005512, 00:16:47.245 "iops": 20138.498298688814, 00:16:47.245 "mibps": 78.66600897925318, 00:16:47.245 "io_failed": 0, 00:16:47.245 "io_timeout": 0, 00:16:47.245 "avg_latency_us": 6346.0591794142265, 00:16:47.245 "min_latency_us": 3425.7454545454543, 00:16:47.245 "max_latency_us": 13285.934545454546 00:16:47.245 } 00:16:47.245 ], 00:16:47.245 "core_count": 1 00:16:47.245 } 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:47.245 | select(.opcode=="crc32c") 00:16:47.245 | "\(.module_name) \(.executed)"' 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79852 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79852 ']' 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79852 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79852 00:16:47.245 killing process with pid 79852 00:16:47.245 Received shutdown signal, test time was about 2.000000 seconds 00:16:47.245 00:16:47.245 Latency(us) 00:16:47.245 [2024-11-17T13:26:36.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:47.245 [2024-11-17T13:26:36.469Z] =================================================================================================================== 00:16:47.245 [2024-11-17T13:26:36.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79852' 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79852 00:16:47.245 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79852 00:16:47.504 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:47.504 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79900 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79900 /var/tmp/bperf.sock 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79900 ']' 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:47.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.505 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:47.505 [2024-11-17 13:26:36.635677] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:16:47.505 [2024-11-17 13:26:36.635943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:47.505 Zero copy mechanism will not be used. 
00:16:47.505 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79900 ] 00:16:47.763 [2024-11-17 13:26:36.766931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.763 [2024-11-17 13:26:36.807327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.763 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.763 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:47.763 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:47.763 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:47.763 13:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:48.022 [2024-11-17 13:26:37.125288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:48.022 13:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.022 13:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.590 nvme0n1 00:16:48.590 13:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:48.590 13:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:48.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:48.590 Zero copy mechanism will not be used. 00:16:48.590 Running I/O for 2 seconds... 
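Each run is torn down the same way: once the accel stats check passes, killprocess (common/autotest_common.sh) stops the bdevperf instance, which is why every result block is followed by a 'killing process with pid ...' line, a 'Received shutdown signal' notice and an all-zero latency table from the dying process. A rough reconstruction of that helper, keeping only the steps the @954-@978 xtrace actually shows (the real function has more branches than this):

    killprocess() {
        local pid=$1
        [ -n "$pid" ]                                         # @954: a pid must be given
        kill -0 "$pid"                                        # @958: process must still be alive
        if [ "$(uname)" = Linux ]; then                       # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 in these runs
        fi
        [ "$process_name" != sudo ]                           # @964: not a sudo wrapper here
        echo "killing process with pid $pid"                  # @972
        kill "$pid"                                           # @973
        wait "$pid"                                           # @978: reap it; bdevperf prints its shutdown table
    }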
00:16:50.462 6828.00 IOPS, 853.50 MiB/s [2024-11-17T13:26:39.686Z] 6826.50 IOPS, 853.31 MiB/s 00:16:50.462 Latency(us) 00:16:50.462 [2024-11-17T13:26:39.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.462 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:50.462 nvme0n1 : 2.00 6822.86 852.86 0.00 0.00 2340.12 1623.51 3768.32 00:16:50.462 [2024-11-17T13:26:39.686Z] =================================================================================================================== 00:16:50.462 [2024-11-17T13:26:39.686Z] Total : 6822.86 852.86 0.00 0.00 2340.12 1623.51 3768.32 00:16:50.462 { 00:16:50.462 "results": [ 00:16:50.462 { 00:16:50.462 "job": "nvme0n1", 00:16:50.462 "core_mask": "0x2", 00:16:50.462 "workload": "randwrite", 00:16:50.462 "status": "finished", 00:16:50.462 "queue_depth": 16, 00:16:50.462 "io_size": 131072, 00:16:50.462 "runtime": 2.003412, 00:16:50.462 "iops": 6822.86020049795, 00:16:50.462 "mibps": 852.8575250622438, 00:16:50.462 "io_failed": 0, 00:16:50.462 "io_timeout": 0, 00:16:50.462 "avg_latency_us": 2340.1162625449756, 00:16:50.462 "min_latency_us": 1623.5054545454545, 00:16:50.462 "max_latency_us": 3768.32 00:16:50.462 } 00:16:50.462 ], 00:16:50.462 "core_count": 1 00:16:50.462 } 00:16:50.462 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:50.462 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:50.462 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:50.462 | select(.opcode=="crc32c") 00:16:50.462 | "\(.module_name) \(.executed)"' 00:16:50.462 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:50.462 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79900 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79900 ']' 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79900 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79900 00:16:51.031 killing process with pid 79900 00:16:51.031 Received shutdown signal, test time was about 2.000000 seconds 00:16:51.031 00:16:51.031 Latency(us) 00:16:51.031 [2024-11-17T13:26:40.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.031 
[2024-11-17T13:26:40.255Z] =================================================================================================================== 00:16:51.031 [2024-11-17T13:26:40.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79900' 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79900 00:16:51.031 13:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79900 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79711 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79711 ']' 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79711 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79711 00:16:51.031 killing process with pid 79711 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79711' 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79711 00:16:51.031 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79711 00:16:51.290 00:16:51.290 real 0m15.945s 00:16:51.290 user 0m30.216s 00:16:51.290 sys 0m5.261s 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.290 ************************************ 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:51.290 END TEST nvmf_digest_clean 00:16:51.290 ************************************ 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:51.290 ************************************ 00:16:51.290 START TEST nvmf_digest_error 00:16:51.290 ************************************ 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:16:51.290 13:26:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.290 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.549 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79977 00:16:51.549 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79977 00:16:51.549 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:51.549 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79977 ']' 00:16:51.549 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.550 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.550 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.550 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.550 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.550 [2024-11-17 13:26:40.579459] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:16:51.550 [2024-11-17 13:26:40.579556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.550 [2024-11-17 13:26:40.726526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.550 [2024-11-17 13:26:40.767122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.550 [2024-11-17 13:26:40.767190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.550 [2024-11-17 13:26:40.767215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.550 [2024-11-17 13:26:40.767223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.550 [2024-11-17 13:26:40.767229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
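nvmf_digest_error deliberately starts the target with --wait-for-rpc so that, before framework init, crc32c can be reassigned to the error accel module; the trace that follows then arms digest corruption per test case. Condensed to just the RPC-level steps that appear below, where rpc_cmd talks to the nvmf target on /var/tmp/spdk.sock and bperf_rpc to the bdevperf instance on /var/tmp/bperf.sock:

    # target side (nvmf_tgt still in --wait-for-rpc):
    rpc_cmd accel_assign_opc -o crc32c -m error        # digest.sh@104: route crc32c to the error module
    # ... framework init, null0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.3:4420 ...

    # initiator side (bdevperf, started with -w randread -o 4096 -q 128 -z):
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # digest.sh@61
    rpc_cmd accel_error_inject_error -o crc32c -t disable                       # digest.sh@63: clean baseline
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # digest.sh@64
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256                # digest.sh@67: corrupt 256 crc32c results
    bperf_py perform_tests                                                      # digest.sh@69

With the target's crc32c routed through the error module and corruption injected there, the initiator (attached with --ddgst) sees mismatched data digests, which is what produces the repeated 'data digest error' / COMMAND TRANSIENT TRANSPORT ERROR pairs that fill the remainder of the trace.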
00:16:51.550 [2024-11-17 13:26:40.767613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.809 [2024-11-17 13:26:40.848053] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.809 13:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:51.809 [2024-11-17 13:26:40.926117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.809 null0 00:16:51.809 [2024-11-17 13:26:40.985730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.809 [2024-11-17 13:26:41.009917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80000 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80000 /var/tmp/bperf.sock 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:51.809 13:26:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80000 ']' 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:51.809 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.810 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:51.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:51.810 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.810 13:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:52.068 [2024-11-17 13:26:41.075041] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:16:52.068 [2024-11-17 13:26:41.075328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80000 ] 00:16:52.068 [2024-11-17 13:26:41.217996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.068 [2024-11-17 13:26:41.259098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.327 [2024-11-17 13:26:41.310505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.895 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.895 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:52.895 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:52.895 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:53.154 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:53.154 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.154 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:53.154 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.154 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.154 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.412 nvme0n1 00:16:53.412 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:53.412 13:26:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.412 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:53.412 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.412 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:53.413 13:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:53.671 Running I/O for 2 seconds... 00:16:53.671 [2024-11-17 13:26:42.696300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.696346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.696360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.710298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.710333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.710346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.723964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.724000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.724027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.737680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.737715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.737727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.751426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.751461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.751472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.765118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.765155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10266 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.765166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.778779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.778812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.778824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.792353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.671 [2024-11-17 13:26:42.792388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.671 [2024-11-17 13:26:42.792421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.671 [2024-11-17 13:26:42.805983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.806018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.806029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.672 [2024-11-17 13:26:42.819555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.819590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.819601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.672 [2024-11-17 13:26:42.833252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.833440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.833457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.672 [2024-11-17 13:26:42.846993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.847027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.847038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.672 [2024-11-17 13:26:42.860640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.860673] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.672 [2024-11-17 13:26:42.874432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.874468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.874479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.672 [2024-11-17 13:26:42.888180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.672 [2024-11-17 13:26:42.888215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.672 [2024-11-17 13:26:42.888226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.902322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.902357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.902368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.916049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.916083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.916110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.929659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.929692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.929703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.943380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.943428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.943439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.957068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 
13:26:42.957100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.957111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.970626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.970658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.970670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.984365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.984404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.984432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:42.998025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:42.998061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:42.998072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.011564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.011599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.011611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.025484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.025520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.025531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.039074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.039108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.039119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.052568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.052602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.052628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.066230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.066264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.066275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.079772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.079807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.079834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.093469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.093503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.093513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.107101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.107134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.107145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.120932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.121168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.121184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.136045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.136083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.136098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.931 [2024-11-17 13:26:43.150741] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:53.931 [2024-11-17 13:26:43.150800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.931 [2024-11-17 13:26:43.150828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.190 [2024-11-17 13:26:43.165174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.190 [2024-11-17 13:26:43.165207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.190 [2024-11-17 13:26:43.165234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.190 [2024-11-17 13:26:43.178930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.190 [2024-11-17 13:26:43.178963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.190 [2024-11-17 13:26:43.178990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.190 [2024-11-17 13:26:43.192715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.190 [2024-11-17 13:26:43.192750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.190 [2024-11-17 13:26:43.192787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.206291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.206324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.206350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.219962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.219995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.220021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.233551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.233584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.233611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:54.191 [2024-11-17 13:26:43.247126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.247160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.247186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.260705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.260740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.260766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.274309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.274342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.274368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.287961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.288021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.301535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.301568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.301594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.315124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.315157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.315184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.328969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.329004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.329032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.343387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.343421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.343448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.357659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.357691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.357718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.371621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.371653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.371680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.385624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.385656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.385682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.191 [2024-11-17 13:26:43.399223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.191 [2024-11-17 13:26:43.399253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.191 [2024-11-17 13:26:43.399280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.450 [2024-11-17 13:26:43.413290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.450 [2024-11-17 13:26:43.413321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.450 [2024-11-17 13:26:43.413348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.450 [2024-11-17 13:26:43.427367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.450 [2024-11-17 13:26:43.427398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.450 [2024-11-17 13:26:43.427425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.450 [2024-11-17 13:26:43.441246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.450 [2024-11-17 13:26:43.441277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.450 [2024-11-17 13:26:43.441304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.450 [2024-11-17 13:26:43.455268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.450 [2024-11-17 13:26:43.455299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.450 [2024-11-17 13:26:43.455325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.450 [2024-11-17 13:26:43.468977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.469008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.469035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.482537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.482569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.482595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.496350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.496381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.496416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.510027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.510059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.510085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.523880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.523925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:54.451 [2024-11-17 13:26:43.523936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.537421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.537450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.537460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.550967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.550997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.551006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.570546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.570576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.570586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.584590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.584637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.584647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.598371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.598401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.598412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.612161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.612192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.612202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.625961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.625991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:8727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.626001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.639673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.639703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.639713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.653552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.653583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.653593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.451 [2024-11-17 13:26:43.667290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.451 [2024-11-17 13:26:43.667320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.451 [2024-11-17 13:26:43.667330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.710 18217.00 IOPS, 71.16 MiB/s [2024-11-17T13:26:43.934Z] [2024-11-17 13:26:43.681420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.710 [2024-11-17 13:26:43.681452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.710 [2024-11-17 13:26:43.681462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.695135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.695165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.708802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.708831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.708841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.722469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.722499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.722509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.736076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.736123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.736133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.749662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.749692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.749702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.763277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.763307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.763317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.776862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.776892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.776902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.790431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.790461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.790470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.804060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.804118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.804130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.817739] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.817778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.817788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.831288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.831318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.831327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.844797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.844826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.844836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.858345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.858375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.858384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.871884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.871929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.871940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.885461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.885491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.885501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.899155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.899185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.899195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:54.711 [2024-11-17 13:26:43.912825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.912853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.912863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.711 [2024-11-17 13:26:43.926509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.711 [2024-11-17 13:26:43.926539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.711 [2024-11-17 13:26:43.926549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.970 [2024-11-17 13:26:43.940663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.970 [2024-11-17 13:26:43.940710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.970 [2024-11-17 13:26:43.940741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.970 [2024-11-17 13:26:43.954319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.970 [2024-11-17 13:26:43.954349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.970 [2024-11-17 13:26:43.954359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:43.967987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:43.968033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:43.968043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:43.981727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:43.981765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:43.981776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:43.995420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:43.995450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:43.995459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.009043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.009072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.009082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.022594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.022627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.022637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.036291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.036322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.036332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.049853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.049881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.049891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.063416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.063446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.063456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.076995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.077025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.077035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.090579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.090609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.090619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.104204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.104233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.104243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.117795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.117825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.117834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.131371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.131402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.131411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.145355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.145401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.145411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.160532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.160583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.160596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.175070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.175117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.971 [2024-11-17 13:26:44.175143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:54.971 [2024-11-17 13:26:44.188818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:54.971 [2024-11-17 13:26:44.188863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:54.971 [2024-11-17 13:26:44.188873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.202989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.203019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.203029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.216514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.216560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.216571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.230138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.230167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.230177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.243597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.243626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.243636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.257181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.257210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.257220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.270752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.270788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.270798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.284490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.284537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.284548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.298175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.298205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.298214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.311790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.311820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.311829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.325299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.325340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.325350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.338857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.338886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.338896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.352338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.352368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.352377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.365883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.365912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.365922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.379339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.379368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.379378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.392886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.392931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.392941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.406402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.406432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.406442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.419894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.419923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.419933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.231 [2024-11-17 13:26:44.433307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.231 [2024-11-17 13:26:44.433336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.231 [2024-11-17 13:26:44.433346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.452985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.453031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.453041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.466854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.466884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.466894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.480780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 
[2024-11-17 13:26:44.480825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.480836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.494398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.494428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.494438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.507939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.507970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.507980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.521500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.521530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.521540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.535747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.535819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.535841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.550478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.550524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.550535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.564605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.564652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.564664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.578357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.578403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.578413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.591908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.591955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.491 [2024-11-17 13:26:44.591965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.491 [2024-11-17 13:26:44.605678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.491 [2024-11-17 13:26:44.605724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.492 [2024-11-17 13:26:44.605734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.492 [2024-11-17 13:26:44.619276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.492 [2024-11-17 13:26:44.619322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.492 [2024-11-17 13:26:44.619332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.492 [2024-11-17 13:26:44.633018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.492 [2024-11-17 13:26:44.633065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.492 [2024-11-17 13:26:44.633075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.492 [2024-11-17 13:26:44.646755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.492 [2024-11-17 13:26:44.646806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.492 [2024-11-17 13:26:44.646816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.492 [2024-11-17 13:26:44.660343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.492 [2024-11-17 13:26:44.660389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.492 [2024-11-17 13:26:44.660408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.492 18343.50 IOPS, 71.65 MiB/s [2024-11-17T13:26:44.716Z] 
[2024-11-17 13:26:44.673924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfd370) 00:16:55.492 [2024-11-17 13:26:44.673970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.492 [2024-11-17 13:26:44.673980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.492 00:16:55.492 Latency(us) 00:16:55.492 [2024-11-17T13:26:44.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.492 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:55.492 nvme0n1 : 2.01 18355.02 71.70 0.00 0.00 6968.30 6374.87 26929.34 00:16:55.492 [2024-11-17T13:26:44.716Z] =================================================================================================================== 00:16:55.492 [2024-11-17T13:26:44.716Z] Total : 18355.02 71.70 0.00 0.00 6968.30 6374.87 26929.34 00:16:55.492 { 00:16:55.492 "results": [ 00:16:55.492 { 00:16:55.492 "job": "nvme0n1", 00:16:55.492 "core_mask": "0x2", 00:16:55.492 "workload": "randread", 00:16:55.492 "status": "finished", 00:16:55.492 "queue_depth": 128, 00:16:55.492 "io_size": 4096, 00:16:55.492 "runtime": 2.005718, 00:16:55.492 "iops": 18355.022989273668, 00:16:55.492 "mibps": 71.69930855185027, 00:16:55.492 "io_failed": 0, 00:16:55.492 "io_timeout": 0, 00:16:55.492 "avg_latency_us": 6968.299910165076, 00:16:55.492 "min_latency_us": 6374.865454545455, 00:16:55.492 "max_latency_us": 26929.33818181818 00:16:55.492 } 00:16:55.492 ], 00:16:55.492 "core_count": 1 00:16:55.492 } 00:16:55.492 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:55.492 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:55.492 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:55.492 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:55.492 | .driver_specific 00:16:55.492 | .nvme_error 00:16:55.492 | .status_code 00:16:55.492 | .command_transient_transport_error' 00:16:56.059 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:16:56.059 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80000 00:16:56.060 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80000 ']' 00:16:56.060 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80000 00:16:56.060 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:56.060 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.060 13:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80000 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:16:56.060 killing process with pid 80000 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80000' 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80000 00:16:56.060 Received shutdown signal, test time was about 2.000000 seconds 00:16:56.060 00:16:56.060 Latency(us) 00:16:56.060 [2024-11-17T13:26:45.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.060 [2024-11-17T13:26:45.284Z] =================================================================================================================== 00:16:56.060 [2024-11-17T13:26:45.284Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80000 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80062 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80062 /var/tmp/bperf.sock 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80062 ']' 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.060 13:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.060 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:56.060 Zero copy mechanism will not be used. 00:16:56.060 [2024-11-17 13:26:45.242643] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
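
The trace above shows the next test case starting a fresh bdevperf instance (randread, 128 KiB I/O, queue depth 16) in wait mode (-z) against a private RPC socket, then blocking in waitforlisten until that socket accepts RPCs. A minimal sketch of that launch-and-wait pattern, assuming the same socket path and using rpc_get_methods as the readiness probe (the probe used by the real waitforlisten helper may differ):

    # Sketch only; paths match the trace above, the polling loop is an assumption.
    bperf_sock=/var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperf_pid=$!
    # Poll the RPC socket until bdevperf is ready to accept commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperf_sock" rpc_get_methods &>/dev/null; do
        kill -0 "$bperf_pid" || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.5
    done
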
00:16:56.060 [2024-11-17 13:26:45.242746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80062 ] 00:16:56.318 [2024-11-17 13:26:45.381178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.318 [2024-11-17 13:26:45.428011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.318 [2024-11-17 13:26:45.478645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:57.254 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:57.513 nvme0n1 00:16:57.772 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:57.772 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.772 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:57.772 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.772 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:57.772 13:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:57.772 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:57.772 Zero copy mechanism will not be used. 00:16:57.772 Running I/O for 2 seconds... 
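
Once the reactor is up, the digest-error case proper begins: per-command error statistics are enabled on the NVMe bdev layer, a controller is attached with data digest (--ddgst) turned on, CRC32C corruption is injected into the target's accel layer, I/O runs for two seconds, and the resulting command_transient_transport_error count is read back via bdev_get_iostat. A condensed sketch of that RPC sequence, assembled from the commands visible in the trace (it assumes the nvmf target listens on its default RPC socket, and it inlines what the digest.sh helpers bperf_rpc and rpc_cmd do):

    # Sketch only; all commands appear in the trace above, helper wiring is assumed.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    # Keep NVMe error statistics and retry failed commands indefinitely.
    $rpc -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # On the target side, start with CRC32C error injection disabled.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the remote namespace with data digest enabled.
    $rpc -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt 32 CRC32C operations so the host observes data digest errors on reads.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Drive the configured workload, then count transient transport errors.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
    $rpc -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
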
00:16:57.772 [2024-11-17 13:26:46.850071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.772 [2024-11-17 13:26:46.850115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.772 [2024-11-17 13:26:46.850128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.772 [2024-11-17 13:26:46.854367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.772 [2024-11-17 13:26:46.854400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.772 [2024-11-17 13:26:46.854410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.772 [2024-11-17 13:26:46.858532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.772 [2024-11-17 13:26:46.858563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.772 [2024-11-17 13:26:46.858574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.862808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.862856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.862867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.866948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.866995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.867007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.871131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.871163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.875261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.875292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.875303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.879451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.879483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.879494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.883633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.883665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.883676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.887841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.887872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.887882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.891925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.891955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.891966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.896137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.896183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.896194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.900333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.900363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.900373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.904551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.904583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.904595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.908930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.908977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.908988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.913496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.913544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.913556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.917946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.917977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.917990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.922387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.922435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.922446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.926923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.926973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.926985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.931514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.931561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.931572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.936000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.936049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.936061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.940367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.940436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.940449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.944647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.944696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.944708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.949291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.949339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.949350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.953542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.953589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.953600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.957847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.957893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.957904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.962052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.962098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.962109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.966306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.966353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 
[2024-11-17 13:26:46.966364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.970647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.970694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.773 [2024-11-17 13:26:46.970705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.773 [2024-11-17 13:26:46.975068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.773 [2024-11-17 13:26:46.975114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.774 [2024-11-17 13:26:46.975125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.774 [2024-11-17 13:26:46.979315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.774 [2024-11-17 13:26:46.979362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.774 [2024-11-17 13:26:46.979373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.774 [2024-11-17 13:26:46.983509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.774 [2024-11-17 13:26:46.983557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.774 [2024-11-17 13:26:46.983568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:57.774 [2024-11-17 13:26:46.987794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.774 [2024-11-17 13:26:46.987838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.774 [2024-11-17 13:26:46.987851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:57.774 [2024-11-17 13:26:46.992304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:57.774 [2024-11-17 13:26:46.992349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.774 [2024-11-17 13:26:46.992360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:46.996812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:46.996859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:46.996871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.001214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.001244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.001256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.005565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.005613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.005624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.009785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.009833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.009844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.014065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.014104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.014115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.018375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.018422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.018434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.022739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.022796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.022808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.026944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.026989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.027000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.031356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.034 [2024-11-17 13:26:47.031407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.034 [2024-11-17 13:26:47.031418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.034 [2024-11-17 13:26:47.035493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.035540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.035551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.039686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.039733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.039744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.044024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.044071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.044082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.048201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.048247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.048258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.052447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.052493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.052504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.056851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.056896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.056907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.061021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.061066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.061077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.065226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.065273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.065284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.069666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.069712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.074082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.074130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.074156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.078291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.078323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.078334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.082512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.082543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.082553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.086696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 
[2024-11-17 13:26:47.086727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.086737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.090889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.090920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.090930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.095040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.095071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.095082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.099149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.099180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.099191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.103280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.103311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.103322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.107418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.107448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.107459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.111625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.111656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.111666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.115783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.115813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.115824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.119866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.119900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.119911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.123966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.123997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.124007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.128013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.128042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.128053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.132255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.132302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.132313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.136468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.136514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.136526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.140613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.140661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.140672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.035 [2024-11-17 13:26:47.144859] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.035 [2024-11-17 13:26:47.144889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.035 [2024-11-17 13:26:47.144900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.148893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.148924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.148934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.153041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.153083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.157114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.157145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.157155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.161342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.161372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.161383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.165418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.165449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.165460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.169540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.169570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.169581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:58.036 [2024-11-17 13:26:47.173650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.173681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.173692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.177850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.177896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.177907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.182229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.182259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.182270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.186344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.186375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.186385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.190467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.190497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.190508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.194562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.194593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.194604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.198669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.198700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.198711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.202817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.202847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.202858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.206860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.206889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.206900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.211036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.211083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.211094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.215496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.215541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.215552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.219905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.219951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.219962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.224428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.224459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.224471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.229205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.229233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.229245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.233909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.233960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.233972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.238485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.238517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.238529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.242937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.242987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.242999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.247430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.247461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.247472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.036 [2024-11-17 13:26:47.251871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.036 [2024-11-17 13:26:47.251918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.036 [2024-11-17 13:26:47.251929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.256452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.256505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.256516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.260735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.260778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.260792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.264933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.264964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.264975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.269009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.269039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.269050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.274992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.275039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.275050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.279211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.279242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.279253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.283349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.283380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.283390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.287443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.287472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.287483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.291567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.291597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 
13:26:47.291608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.295642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.298 [2024-11-17 13:26:47.295673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.298 [2024-11-17 13:26:47.295684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.298 [2024-11-17 13:26:47.299811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.299841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.299851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.303947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.303977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.303988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.307992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.308037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.308048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.312145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.312178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.312189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.316247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.316280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.316291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.320437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.320467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.320478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.324573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.324620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.324632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.328802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.328832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.328843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.332906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.332936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.332946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.337048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.337078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.337090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.341238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.341268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.341279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.345354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.345386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.345396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.349458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.349489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.349499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.353585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.353616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.353627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.357738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.357777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.357788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.361747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.361785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.361795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.365941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.365972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.365983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.369982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.370012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.370022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.374020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.374051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.374061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.378086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.378117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.378127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.382144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.382174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.382184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.386317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.386348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.386359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.390463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.390494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.390504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.394570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.299 [2024-11-17 13:26:47.394601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.299 [2024-11-17 13:26:47.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.299 [2024-11-17 13:26:47.398712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.398743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.398754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.402897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.402943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.402953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.407034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.407080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.407091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.411221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.411251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.411261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.415436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.415482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.415493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.419615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.419645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.419656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.423856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.423885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.423896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.427987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.428017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.428027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.432097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.432126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.432137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.436277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 
00:16:58.300 [2024-11-17 13:26:47.436306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.436317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.440431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.440478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.440490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.444662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.444711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.444722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.448918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.448949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.448959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.452983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.453013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.453024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.457074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.457105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.457115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.461215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.461246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.461256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.465345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.465385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.469493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.469524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.469535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.473664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.473694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.473704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.477801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.477831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.477842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.481907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.481937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.481947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.485939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.485969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.485980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.490036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.490067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.490078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.494182] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.494212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.494223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.300 [2024-11-17 13:26:47.498228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.300 [2024-11-17 13:26:47.498258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.300 [2024-11-17 13:26:47.498269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.301 [2024-11-17 13:26:47.502329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.301 [2024-11-17 13:26:47.502360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.301 [2024-11-17 13:26:47.502370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.301 [2024-11-17 13:26:47.506460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.301 [2024-11-17 13:26:47.506489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.301 [2024-11-17 13:26:47.506500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.301 [2024-11-17 13:26:47.510561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.301 [2024-11-17 13:26:47.510591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.301 [2024-11-17 13:26:47.510601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.301 [2024-11-17 13:26:47.514880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.301 [2024-11-17 13:26:47.514927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.301 [2024-11-17 13:26:47.514937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.519346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.519377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.519387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:16:58.562 [2024-11-17 13:26:47.523513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.523544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.523555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.527755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.527796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.527806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.531911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.531941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.531952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.535997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.536041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.536052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.540144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.540190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.540202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.544283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.544328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.544339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.548521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.548553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.548563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.552615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.552661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.552672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.556728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.556768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.556780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.560867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.560897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.560908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.564913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.564943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.564953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.568973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.569004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.569014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.573119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.573149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.573159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.577187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.577218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.577229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.581351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.581382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.581392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.585445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.585476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.585486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.589560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.589591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.589601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.593709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.593750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.597801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.597831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.597841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.601855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.601885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.601895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.605939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.605970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.605980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.609978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.610009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.610019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.613996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.614043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.562 [2024-11-17 13:26:47.614054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.562 [2024-11-17 13:26:47.618097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.562 [2024-11-17 13:26:47.618129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.618140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.622155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.622186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.622197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.626285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.626316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.626327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.630429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.630461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.630471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.634547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.634578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 
[2024-11-17 13:26:47.634588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.638636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.638667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.638677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.642824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.642870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.642880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.646890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.646936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.646947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.650984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.651042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.655198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.655229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.655240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.659334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.659364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.659375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.663433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.663463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.663474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.667534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.667564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.667575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.671693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.671723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.671733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.675799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.675829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.675840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.679883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.679912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.679922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.683921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.683950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.683960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.687982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.688011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.688021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.692099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.692131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.692142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.696197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.696226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.696236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.700292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.700321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.700331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.704382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.704418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.704446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.708506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.708552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.708564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.712610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.712658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.712670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.563 [2024-11-17 13:26:47.716756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.563 [2024-11-17 13:26:47.716798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.563 [2024-11-17 13:26:47.716808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.720856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.720886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.720896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.724988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.725019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.725030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.729040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.729070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.729081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.733097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.733128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.733140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.737227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.737258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.737268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.741337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.741368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.741379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.745381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.745412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.745423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.749410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 
[2024-11-17 13:26:47.749440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.749451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.753551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.753582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.753593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.757693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.757725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.757736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.761767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.761796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.765856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.765887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.765897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.769982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.770030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.770041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.774201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.774232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.774242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.564 [2024-11-17 13:26:47.778431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8b6400) 00:16:58.564 [2024-11-17 13:26:47.778462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.564 [2024-11-17 13:26:47.778472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.782911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.782957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.782968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.787170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.787200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.787210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.791415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.791445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.791455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.795587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.795617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.795628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.799720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.799750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.799773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.803829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.803858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.803869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.808022] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.808052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.808062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.812158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.812204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.812215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.816303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.816348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.816359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.820531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.820564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.820575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.824819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.824864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.824875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.825 [2024-11-17 13:26:47.829088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.825 [2024-11-17 13:26:47.829135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.825 [2024-11-17 13:26:47.829146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.833485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.833533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.833546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:16:58.826 [2024-11-17 13:26:47.838319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.838383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.838395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 7300.00 IOPS, 912.50 MiB/s [2024-11-17T13:26:48.050Z] [2024-11-17 13:26:47.844645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.844695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.844737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.848972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.849018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.853154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.853184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.853195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.857422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.857451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.857461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.861530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.861561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.861571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.865670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.865701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.865711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.869855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.869886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.869896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.873936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.873966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.873977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.877990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.878020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.878031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.882073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.882103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.882114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.886184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.886214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.886224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.890290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.890321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.890331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.894437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.894468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.894479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.898515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.898545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.898556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.902663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.902694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.902704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.906840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.906886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.906897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.910984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.911030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.911041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.915272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.915303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.915314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.919390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.919421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.919431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.923524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.923555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:58.826 [2024-11-17 13:26:47.923565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.927599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.927634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.927644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.931714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.931749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.931770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.935891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.935921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.935932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.939969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.940013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.826 [2024-11-17 13:26:47.940025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.826 [2024-11-17 13:26:47.944142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.826 [2024-11-17 13:26:47.944171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.948196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.948225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.948235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.952272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.952301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.952311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.956383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.956418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.956445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.960516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.960546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.960558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.964723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.964754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.964775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.968908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.968939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.968950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.973008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.973039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.973049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.977126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.977156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.977167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.981304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.981334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.981345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.985465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.985496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.985507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.989624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.989654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.989665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.993725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.993755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.993776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:47.997816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:47.997846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:47.997857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.001951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.001981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.001991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.006094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.006125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.006135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.010366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.010396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.010407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.014617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.014664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.014675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.018838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.018884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.018895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.023008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.023055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.023066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.027173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.027204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.027214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.031308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.031338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.031348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.035478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 [2024-11-17 13:26:48.035512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.035523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:58.827 [2024-11-17 13:26:48.039632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:58.827 
[2024-11-17 13:26:48.039663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.827 [2024-11-17 13:26:48.039674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.090 [2024-11-17 13:26:48.045302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.090 [2024-11-17 13:26:48.045361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.090 [2024-11-17 13:26:48.045373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.090 [2024-11-17 13:26:48.049770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.090 [2024-11-17 13:26:48.049802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.090 [2024-11-17 13:26:48.049813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.090 [2024-11-17 13:26:48.054062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.054094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.054104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.058362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.058393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.058403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.062498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.062529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.062539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.066582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.066613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.066623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.070813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.070858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.070869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.075264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.075311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.075323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.079847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.079902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.079913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.084213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.084263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.084275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.088793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.088851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.088863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.093270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.093318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.093330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.097630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.097677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.097688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.102075] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.102153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.102165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.106418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.106464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.106475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.110681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.110728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.110739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.115091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.115139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.115165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.119429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.119475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.119486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.123660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.123708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.123719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.127846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.127896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.127907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:16:59.091 [2024-11-17 13:26:48.132096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.132140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.132166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.136274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.136318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.136329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.140465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.140512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.140523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.144683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.144732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.144759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.149173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.149235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.149251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.153411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.153459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.153470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.157646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.157692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.157703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.161822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.091 [2024-11-17 13:26:48.161867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.091 [2024-11-17 13:26:48.161879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.091 [2024-11-17 13:26:48.166118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.166163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.166174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.170277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.170324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.170335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.174516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.174563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.174574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.178708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.178755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.178768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.183028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.183089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.183121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.187341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.187389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.187400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.191553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.191604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.191615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.195740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.195796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.195807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.200056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.200103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.200114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.204378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.204450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.204462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.208592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.208639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.208650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.212829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.212874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.212885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.217121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.217169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.217182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.221438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.221483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.221495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.225585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.225631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.225642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.229858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.229904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.229915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.234030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.234075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.234087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.238352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.238399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.238410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.242674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.242719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.242731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.247308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.247357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 
[2024-11-17 13:26:48.247369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.252047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.252081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.252094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.256676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.256726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.256749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.261236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.261283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.261294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.265571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.265617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.265628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.270035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.270082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.092 [2024-11-17 13:26:48.270094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.092 [2024-11-17 13:26:48.274580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.092 [2024-11-17 13:26:48.274627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.274639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.278940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.278988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.278999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.283240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.283287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.283298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.287366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.287397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.287408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.291463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.291494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.291504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.295596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.295625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.295636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.299912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.299942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.299953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.093 [2024-11-17 13:26:48.304265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.093 [2024-11-17 13:26:48.304293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.093 [2024-11-17 13:26:48.304315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.308666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.308698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.308709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.313576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.313607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.313618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.317995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.318026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.318037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.322429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.322460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.322471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.326996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.327042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.327053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.331453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.331484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.331494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.335710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.335741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.335751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.339957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.339986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.339997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.344129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.344157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.344168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.348318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.348346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.348356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.352534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.352580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.352591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.356692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.356724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.356762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.360850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.360881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.360892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.364981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.365011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.365022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.369093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 
13:26:48.369123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.369134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.373163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.373193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.373204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.377231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.377262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.376 [2024-11-17 13:26:48.377272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.376 [2024-11-17 13:26:48.381363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.376 [2024-11-17 13:26:48.381394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.381404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.385560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.385591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.385601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.389689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.389719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.389729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.393824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.393854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.393865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.397882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.397911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.397922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.401953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.401994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.406099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.406129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.410230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.410260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.410270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.414336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.414366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.414377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.418397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.418427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.418438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.422504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.422534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.422545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.426573] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.426604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.426614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.430751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.430790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.430817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.434894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.434939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.434950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.439024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.439070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.439080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.443181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.443211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.443221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.447309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.447339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.447349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.451397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.451427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.451439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:59.377 [2024-11-17 13:26:48.455487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.455527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.459630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.459660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.459671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.463736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.463776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.463787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.467792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.467839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.467850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.471917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.471946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.471956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.475952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.475996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.476007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.480085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.480117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.480128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.484166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.484195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.484206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.488232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.377 [2024-11-17 13:26:48.488260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.377 [2024-11-17 13:26:48.488271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.377 [2024-11-17 13:26:48.492340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.492369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.492379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.496382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.496437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.496448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.500539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.500568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.500579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.504696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.504743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.504771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.508784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.508814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.508824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.512813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.512843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.512853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.516899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.516928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.516938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.520958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.520987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.520997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.525029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.525059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.525069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.529093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.529123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.529134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.533112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.533142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.533152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.537187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.537217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.537227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.541219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.541249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.541259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.545341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.545370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.545380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.549475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.549505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.549516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.553551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.553582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.553592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.557770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.557800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.557811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.561849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.561879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.561889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.565952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.565982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 
[2024-11-17 13:26:48.565992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.570083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.570113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.570123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.574154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.574184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.574194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.578278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.578308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.578319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.378 [2024-11-17 13:26:48.582667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.378 [2024-11-17 13:26:48.582697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.378 [2024-11-17 13:26:48.582707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.586944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.586990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.587002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.591394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.591424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.591435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.595788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.595817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.595827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.600080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.600110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.600120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.604494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.604538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.604549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.608924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.608953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.608964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.613469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.613499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.613509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.617827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.617857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.617868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.622068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.622099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.622109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.626273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.626303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.626314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.630465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.630496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.630506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.634556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.634587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.634598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.638649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.638679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.638689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.642824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.642870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.646975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.647021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.647032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.651171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.651201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.651211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.655424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.655454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.655464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.659590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.659624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.659634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.663779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.663808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.663818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.667867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.667896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.667906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.671987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.672017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.672027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.676105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.664 [2024-11-17 13:26:48.676133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.664 [2024-11-17 13:26:48.676144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.664 [2024-11-17 13:26:48.680191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.680220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.680231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.684287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 
[2024-11-17 13:26:48.684316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.684326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.688550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.688581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.688592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.692751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.692790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.692801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.696899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.696930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.696940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.700964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.700995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.701005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.705069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.705100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.705111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.709175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.709205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.709215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.713352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.713382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.713393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.717608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.717639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.717650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.721867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.721898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.721908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.725989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.726031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.730149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.730180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.734267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.734297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.734308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.738437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.738468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.738479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.742503] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.742533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.742543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.746547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.746583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.746593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.750705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.750740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.750752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.754843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.754877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.754904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.758970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.759004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.759032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.763181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.763216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.763227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.767290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.767324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.767335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:16:59.665 [2024-11-17 13:26:48.771433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.771467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.771479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.775551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.775584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.775596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.779708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.779742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.779754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.783818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.783856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.783867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.787926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.665 [2024-11-17 13:26:48.787958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.665 [2024-11-17 13:26:48.787969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.665 [2024-11-17 13:26:48.792022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.792054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.792066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.796186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.796219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.796246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.800322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.800356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.800368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.804437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.804472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.804484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.808546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.808582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.808610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.812788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.812869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.812898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.817039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.817073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.821196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.821230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.821241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.825295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.825329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.825341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.829416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.829450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.829461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.833558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.833591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.833602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:59.666 [2024-11-17 13:26:48.837732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8b6400) 00:16:59.666 [2024-11-17 13:26:48.837774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.666 [2024-11-17 13:26:48.837802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:59.666 7323.50 IOPS, 915.44 MiB/s 00:16:59.666 Latency(us) 00:16:59.666 [2024-11-17T13:26:48.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.666 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:59.666 nvme0n1 : 2.00 7325.23 915.65 0.00 0.00 2181.34 1936.29 12928.47 00:16:59.666 [2024-11-17T13:26:48.890Z] =================================================================================================================== 00:16:59.666 [2024-11-17T13:26:48.890Z] Total : 7325.23 915.65 0.00 0.00 2181.34 1936.29 12928.47 00:16:59.666 { 00:16:59.666 "results": [ 00:16:59.666 { 00:16:59.666 "job": "nvme0n1", 00:16:59.666 "core_mask": "0x2", 00:16:59.666 "workload": "randread", 00:16:59.666 "status": "finished", 00:16:59.666 "queue_depth": 16, 00:16:59.666 "io_size": 131072, 00:16:59.666 "runtime": 2.001712, 00:16:59.666 "iops": 7325.229603459438, 00:16:59.666 "mibps": 915.6537004324298, 00:16:59.666 "io_failed": 0, 00:16:59.666 "io_timeout": 0, 00:16:59.666 "avg_latency_us": 2181.3368079209886, 00:16:59.666 "min_latency_us": 1936.290909090909, 00:16:59.666 "max_latency_us": 12928.465454545454 00:16:59.666 } 00:16:59.666 ], 00:16:59.666 "core_count": 1 00:16:59.666 } 00:16:59.666 13:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:59.666 13:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:59.666 13:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:59.666 13:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:59.666 | .driver_specific 00:16:59.666 | .nvme_error 00:16:59.666 | .status_code 00:16:59.666 | 
.command_transient_transport_error' 00:16:59.931 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 473 > 0 )) 00:16:59.931 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80062 00:16:59.931 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80062 ']' 00:16:59.931 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80062 00:16:59.931 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80062 00:17:00.189 killing process with pid 80062 00:17:00.189 Received shutdown signal, test time was about 2.000000 seconds 00:17:00.189 00:17:00.189 Latency(us) 00:17:00.189 [2024-11-17T13:26:49.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.189 [2024-11-17T13:26:49.413Z] =================================================================================================================== 00:17:00.189 [2024-11-17T13:26:49.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80062' 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80062 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80062 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80122 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80122 /var/tmp/bperf.sock 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80122 ']' 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:00.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.189 13:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 [2024-11-17 13:26:49.421613] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:00.447 [2024-11-17 13:26:49.421922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80122 ] 00:17:00.447 [2024-11-17 13:26:49.560005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.447 [2024-11-17 13:26:49.607063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.447 [2024-11-17 13:26:49.657202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:01.384 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:01.952 nvme0n1 00:17:01.952 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:01.952 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.952 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:01.952 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.952 13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:01.952 
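The xtrace above wires up the randwrite digest-error pass: bdevperf (pid 80122) is driven over /var/tmp/bperf.sock, the controller is attached with TCP data digest enabled, and CRC-32C corruption is injected through the accel error-injection RPC. A condensed sketch of that sequence, assuming the rpc_cmd calls in the trace go to the default target-side RPC socket, looks like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdevperf side (bperf.sock): keep per-bdev NVMe error statistics and retry failed I/O indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side (rpc_cmd in the trace; default RPC socket assumed): start with injection disabled
    $rpc accel_error_inject_error -o crc32c -t disable

    # attach the NVMe-oF TCP controller with data digest enabled (--ddgst); this exposes bdev nvme0n1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 256th crc32c operation on the target, then run the 2-second randwrite workload
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces in the trace below as a data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion.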
13:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:01.952 Running I/O for 2 seconds... 00:17:01.952 [2024-11-17 13:26:51.087996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f7100 00:17:01.952 [2024-11-17 13:26:51.089267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.952 [2024-11-17 13:26:51.089306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:01.952 [2024-11-17 13:26:51.101003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f7970 00:17:01.952 [2024-11-17 13:26:51.102223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.952 [2024-11-17 13:26:51.102259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.952 [2024-11-17 13:26:51.113782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f81e0 00:17:01.952 [2024-11-17 13:26:51.114998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.952 [2024-11-17 13:26:51.115031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:01.952 [2024-11-17 13:26:51.126634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f8a50 00:17:01.953 [2024-11-17 13:26:51.127848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.953 [2024-11-17 13:26:51.127907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:01.953 [2024-11-17 13:26:51.139489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f92c0 00:17:01.953 [2024-11-17 13:26:51.140670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.953 [2024-11-17 13:26:51.140706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:01.953 [2024-11-17 13:26:51.152362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f9b30 00:17:01.953 [2024-11-17 13:26:51.153533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.953 [2024-11-17 13:26:51.153566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:01.953 [2024-11-17 13:26:51.165214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fa3a0 00:17:01.953 [2024-11-17 13:26:51.166526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:9162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:01.953 [2024-11-17 13:26:51.166554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.178866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fac10 00:17:02.212 [2024-11-17 13:26:51.179995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.180026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.191877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fb480 00:17:02.212 [2024-11-17 13:26:51.193109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.193144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.204826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fbcf0 00:17:02.212 [2024-11-17 13:26:51.205926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.205960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.217712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fc560 00:17:02.212 [2024-11-17 13:26:51.218822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.230567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fcdd0 00:17:02.212 [2024-11-17 13:26:51.231639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.231670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.243472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fd640 00:17:02.212 [2024-11-17 13:26:51.244534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.244566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.256337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fdeb0 00:17:02.212 [2024-11-17 13:26:51.257515] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.257543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.269421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fe720 00:17:02.212 [2024-11-17 13:26:51.270452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.270602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.282452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ff3c8 00:17:02.212 [2024-11-17 13:26:51.283472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.283505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.300686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ff3c8 00:17:02.212 [2024-11-17 13:26:51.302649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.212 [2024-11-17 13:26:51.302684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.212 [2024-11-17 13:26:51.314134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fe720 00:17:02.212 [2024-11-17 13:26:51.316072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.316137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.328125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fdeb0 00:17:02.213 [2024-11-17 13:26:51.330071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.330120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.341498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fd640 00:17:02.213 [2024-11-17 13:26:51.343403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.343435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.354539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fcdd0 00:17:02.213 [2024-11-17 13:26:51.356453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.356485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.367529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fc560 00:17:02.213 [2024-11-17 13:26:51.369542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.369571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.380479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fbcf0 00:17:02.213 [2024-11-17 13:26:51.382344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.382376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.393273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fb480 00:17:02.213 [2024-11-17 13:26:51.395113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.395144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.406117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fac10 00:17:02.213 [2024-11-17 13:26:51.408091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.408124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.419139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fa3a0 00:17:02.213 [2024-11-17 13:26:51.420967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.213 [2024-11-17 13:26:51.421001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:02.213 [2024-11-17 13:26:51.432110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f9b30 00:17:02.472 [2024-11-17 13:26:51.434304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.445744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f92c0 00:17:02.472 [2024-11-17 
13:26:51.447530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.447562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.458790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f8a50 00:17:02.472 [2024-11-17 13:26:51.460588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.460622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.471828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f81e0 00:17:02.472 [2024-11-17 13:26:51.473642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.473674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.484878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f7970 00:17:02.472 [2024-11-17 13:26:51.486602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.486633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.497850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f7100 00:17:02.472 [2024-11-17 13:26:51.499559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.499590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.510810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f6890 00:17:02.472 [2024-11-17 13:26:51.512642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.512680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.523768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f6020 00:17:02.472 [2024-11-17 13:26:51.525467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.525499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.536584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f57b0 
00:17:02.472 [2024-11-17 13:26:51.538269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.538300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.549383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f4f40 00:17:02.472 [2024-11-17 13:26:51.551056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.551088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:02.472 [2024-11-17 13:26:51.562264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f46d0 00:17:02.472 [2024-11-17 13:26:51.564036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.472 [2024-11-17 13:26:51.564063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.575219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f3e60 00:17:02.473 [2024-11-17 13:26:51.576879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.576911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.588061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f35f0 00:17:02.473 [2024-11-17 13:26:51.589697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.589727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.600924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f2d80 00:17:02.473 [2024-11-17 13:26:51.602647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.602674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.613914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f2510 00:17:02.473 [2024-11-17 13:26:51.615498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.615529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.626792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) 
with pdu=0x2000166f1ca0 00:17:02.473 [2024-11-17 13:26:51.628356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.628387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.639602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f1430 00:17:02.473 [2024-11-17 13:26:51.641189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.641221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.652426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f0bc0 00:17:02.473 [2024-11-17 13:26:51.653974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.654006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.665264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f0350 00:17:02.473 [2024-11-17 13:26:51.666795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.666833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.678090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166efae0 00:17:02.473 [2024-11-17 13:26:51.679613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.473 [2024-11-17 13:26:51.679644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:02.473 [2024-11-17 13:26:51.692101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ef270 00:17:02.732 [2024-11-17 13:26:51.693839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.693875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.706287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166eea00 00:17:02.732 [2024-11-17 13:26:51.707807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.708002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.719933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b73750) with pdu=0x2000166ee190 00:17:02.732 [2024-11-17 13:26:51.721451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.721485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.733286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ed920 00:17:02.732 [2024-11-17 13:26:51.734735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.734792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.746326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ed0b0 00:17:02.732 [2024-11-17 13:26:51.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.747832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.759566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ec840 00:17:02.732 [2024-11-17 13:26:51.761174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.761209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.772668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ebfd0 00:17:02.732 [2024-11-17 13:26:51.774094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.774143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.785586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166eb760 00:17:02.732 [2024-11-17 13:26:51.787002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.787034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.798547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166eaef0 00:17:02.732 [2024-11-17 13:26:51.799938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.799970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.811900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1b73750) with pdu=0x2000166ea680 00:17:02.732 [2024-11-17 13:26:51.813284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.813317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.824809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e9e10 00:17:02.732 [2024-11-17 13:26:51.826280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.826315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.838061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e95a0 00:17:02.732 [2024-11-17 13:26:51.839392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.839424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.851405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e8d30 00:17:02.732 [2024-11-17 13:26:51.852809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.852842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.864402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e84c0 00:17:02.732 [2024-11-17 13:26:51.865910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.865938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.877622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e7c50 00:17:02.732 [2024-11-17 13:26:51.879069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.879247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.891735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e73e0 00:17:02.732 [2024-11-17 13:26:51.893290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.893486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.905408] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e6b70 00:17:02.732 [2024-11-17 13:26:51.906870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.907060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:02.732 [2024-11-17 13:26:51.919160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e6300 00:17:02.732 [2024-11-17 13:26:51.920543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.732 [2024-11-17 13:26:51.920583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:02.733 [2024-11-17 13:26:51.932391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e5a90 00:17:02.733 [2024-11-17 13:26:51.933788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.733 [2024-11-17 13:26:51.933822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.733 [2024-11-17 13:26:51.945405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e5220 00:17:02.733 [2024-11-17 13:26:51.946625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.733 [2024-11-17 13:26:51.946657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:51.958909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e49b0 00:17:02.992 [2024-11-17 13:26:51.960107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:51.960138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:51.971834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e4140 00:17:02.992 [2024-11-17 13:26:51.973107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:51.973140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:51.984692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e38d0 00:17:02.992 [2024-11-17 13:26:51.985994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:51.986021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
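Every completion printed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) in this stretch is one write that came back with a transient transport error because the injected CRC-32C corruption broke the TCP data digest check. The pass/fail decision is not made by grepping this output; the harness reads the per-bdev error counter over RPC, using the same pipeline seen in the trace further up. The grep line below is only a hypothetical cross-check against a saved copy of this console output (console.log is an assumed filename, not part of digest.sh):

    # authoritative count, as read by the test itself
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

    # informal cross-check (hypothetical helper, not part of the harness)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log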
00:17:02.992 [2024-11-17 13:26:51.997649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e3060 00:17:02.992 [2024-11-17 13:26:51.998821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:51.998875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.010540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e27f0 00:17:02.992 [2024-11-17 13:26:52.011681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.011834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.023622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e1f80 00:17:02.992 [2024-11-17 13:26:52.024768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.036476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e1710 00:17:02.992 [2024-11-17 13:26:52.037581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.037613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.049489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e0ea0 00:17:02.992 [2024-11-17 13:26:52.050596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.050742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.062589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e0630 00:17:02.992 [2024-11-17 13:26:52.063676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.063709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:02.992 19230.00 IOPS, 75.12 MiB/s [2024-11-17T13:26:52.216Z] [2024-11-17 13:26:52.076733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166dfdc0 00:17:02.992 [2024-11-17 13:26:52.077806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.077860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.089569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166df550 00:17:02.992 [2024-11-17 13:26:52.090641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.090674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.102555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166dece0 00:17:02.992 [2024-11-17 13:26:52.103590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.103748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.115490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166de470 00:17:02.992 [2024-11-17 13:26:52.116534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.116567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.133745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ddc00 00:17:02.992 [2024-11-17 13:26:52.135833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.135989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.146840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166de470 00:17:02.992 [2024-11-17 13:26:52.148792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.992 [2024-11-17 13:26:52.148824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:02.992 [2024-11-17 13:26:52.159669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166dece0 00:17:02.992 [2024-11-17 13:26:52.161617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.993 [2024-11-17 13:26:52.161650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:02.993 [2024-11-17 13:26:52.172505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166df550 00:17:02.993 [2024-11-17 13:26:52.174436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.993 [2024-11-17 13:26:52.174467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:02.993 [2024-11-17 13:26:52.185389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166dfdc0 00:17:02.993 [2024-11-17 13:26:52.187437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.993 [2024-11-17 13:26:52.187469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:02.993 [2024-11-17 13:26:52.198409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e0630 00:17:02.993 [2024-11-17 13:26:52.200292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:02.993 [2024-11-17 13:26:52.200323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:02.993 [2024-11-17 13:26:52.211580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e0ea0 00:17:03.252 [2024-11-17 13:26:52.213662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.213694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.224971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e1710 00:17:03.252 [2024-11-17 13:26:52.226819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.226850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.237865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e1f80 00:17:03.252 [2024-11-17 13:26:52.239691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.239724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.250744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e27f0 00:17:03.252 [2024-11-17 13:26:52.252584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.252617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.263600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e3060 00:17:03.252 [2024-11-17 13:26:52.265420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 
13:26:52.265452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.276379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e38d0 00:17:03.252 [2024-11-17 13:26:52.278180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.278211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.289155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e4140 00:17:03.252 [2024-11-17 13:26:52.290929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.290959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.301932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e49b0 00:17:03.252 [2024-11-17 13:26:52.303821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.303858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.314971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e5220 00:17:03.252 [2024-11-17 13:26:52.316722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.316754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.328126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e5a90 00:17:03.252 [2024-11-17 13:26:52.330038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.330071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.342199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e6300 00:17:03.252 [2024-11-17 13:26:52.343938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.343972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.355901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e6b70 00:17:03.252 [2024-11-17 13:26:52.357657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:03.252 [2024-11-17 13:26:52.357688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.369402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e73e0 00:17:03.252 [2024-11-17 13:26:52.371100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.371131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.382275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e7c50 00:17:03.252 [2024-11-17 13:26:52.383946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.383976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.395228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e84c0 00:17:03.252 [2024-11-17 13:26:52.397033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.397065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.408161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e8d30 00:17:03.252 [2024-11-17 13:26:52.409875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.252 [2024-11-17 13:26:52.409905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:03.252 [2024-11-17 13:26:52.421000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e95a0 00:17:03.253 [2024-11-17 13:26:52.422620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.253 [2024-11-17 13:26:52.422651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:03.253 [2024-11-17 13:26:52.433877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166e9e10 00:17:03.253 [2024-11-17 13:26:52.435602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.253 [2024-11-17 13:26:52.435633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:03.253 [2024-11-17 13:26:52.446872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ea680 00:17:03.253 [2024-11-17 13:26:52.448473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14843 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:03.253 [2024-11-17 13:26:52.448504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:03.253 [2024-11-17 13:26:52.459745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166eaef0 00:17:03.253 [2024-11-17 13:26:52.461342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.253 [2024-11-17 13:26:52.461373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.472888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166eb760 00:17:03.511 [2024-11-17 13:26:52.474672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.474702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.486193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ebfd0 00:17:03.511 [2024-11-17 13:26:52.487732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.487789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.499119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ec840 00:17:03.511 [2024-11-17 13:26:52.500670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.500703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.512021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ed0b0 00:17:03.511 [2024-11-17 13:26:52.513696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.513731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.525031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ed920 00:17:03.511 [2024-11-17 13:26:52.526529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.526561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.537861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ee190 00:17:03.511 [2024-11-17 13:26:52.539346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5399 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.539377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.550722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166eea00 00:17:03.511 [2024-11-17 13:26:52.552215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.552247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.563532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ef270 00:17:03.511 [2024-11-17 13:26:52.565023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.565054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.576503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166efae0 00:17:03.511 [2024-11-17 13:26:52.578145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.578175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.589969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f0350 00:17:03.511 [2024-11-17 13:26:52.591394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.511 [2024-11-17 13:26:52.591425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:03.511 [2024-11-17 13:26:52.603008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f0bc0 00:17:03.512 [2024-11-17 13:26:52.604471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.604502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.616031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f1430 00:17:03.512 [2024-11-17 13:26:52.617669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.617702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.629268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f1ca0 00:17:03.512 [2024-11-17 13:26:52.630657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:9286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.630688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.642308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f2510 00:17:03.512 [2024-11-17 13:26:52.643672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.643703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.655361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f2d80 00:17:03.512 [2024-11-17 13:26:52.656974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.657003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.668516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f35f0 00:17:03.512 [2024-11-17 13:26:52.669918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.669949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.681507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f3e60 00:17:03.512 [2024-11-17 13:26:52.682859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.682888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.694424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f46d0 00:17:03.512 [2024-11-17 13:26:52.695912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.695939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.707469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f4f40 00:17:03.512 [2024-11-17 13:26:52.708791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.708849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:03.512 [2024-11-17 13:26:52.720340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f57b0 00:17:03.512 [2024-11-17 13:26:52.721629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.512 [2024-11-17 13:26:52.721660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:03.771 [2024-11-17 13:26:52.733549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f6020 00:17:03.771 [2024-11-17 13:26:52.735012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.771 [2024-11-17 13:26:52.735039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:03.771 [2024-11-17 13:26:52.746885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f6890 00:17:03.771 [2024-11-17 13:26:52.748159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.771 [2024-11-17 13:26:52.748190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:03.771 [2024-11-17 13:26:52.759837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f7100 00:17:03.771 [2024-11-17 13:26:52.761251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.761278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.772841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f7970 00:17:03.772 [2024-11-17 13:26:52.774059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.774090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.785655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f81e0 00:17:03.772 [2024-11-17 13:26:52.786887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.786930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.798523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f8a50 00:17:03.772 [2024-11-17 13:26:52.799715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.799747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.811391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f92c0 00:17:03.772 [2024-11-17 13:26:52.812695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.812722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.824304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166f9b30 00:17:03.772 [2024-11-17 13:26:52.825490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.825522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.837156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fa3a0 00:17:03.772 [2024-11-17 13:26:52.838301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.838331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.849974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fac10 00:17:03.772 [2024-11-17 13:26:52.851225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.851253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.862993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fb480 00:17:03.772 [2024-11-17 13:26:52.864113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.864144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.875847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fbcf0 00:17:03.772 [2024-11-17 13:26:52.876956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.876988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.888961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fc560 00:17:03.772 [2024-11-17 13:26:52.890169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.890204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.902666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fcdd0 00:17:03.772 [2024-11-17 
13:26:52.903740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.903826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.916461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fd640 00:17:03.772 [2024-11-17 13:26:52.917646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.917673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.930021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fdeb0 00:17:03.772 [2024-11-17 13:26:52.931064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.931097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.943127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fe720 00:17:03.772 [2024-11-17 13:26:52.944294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.944322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.956698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ff3c8 00:17:03.772 [2024-11-17 13:26:52.957743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.958365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.975926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166ff3c8 00:17:03.772 [2024-11-17 13:26:52.977889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.977923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.772 [2024-11-17 13:26:52.989198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fe720 00:17:03.772 [2024-11-17 13:26:52.991299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.772 [2024-11-17 13:26:52.991341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:04.031 [2024-11-17 13:26:53.002826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fdeb0 
00:17:04.031 [2024-11-17 13:26:53.004861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.031 [2024-11-17 13:26:53.004895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:04.031 [2024-11-17 13:26:53.015968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fd640 00:17:04.031 [2024-11-17 13:26:53.018099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.031 [2024-11-17 13:26:53.018132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:04.031 [2024-11-17 13:26:53.029560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fcdd0 00:17:04.031 [2024-11-17 13:26:53.031457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.031 [2024-11-17 13:26:53.031489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:04.031 [2024-11-17 13:26:53.042691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fc560 00:17:04.031 [2024-11-17 13:26:53.044582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.031 [2024-11-17 13:26:53.044618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:04.031 [2024-11-17 13:26:53.055689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fbcf0 00:17:04.031 [2024-11-17 13:26:53.057565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.031 [2024-11-17 13:26:53.057598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:04.031 [2024-11-17 13:26:53.068976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73750) with pdu=0x2000166fb480 00:17:04.031 [2024-11-17 13:26:53.070820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:04.031 [2024-11-17 13:26:53.070853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:04.031 00:17:04.031 Latency(us) 00:17:04.031 [2024-11-17T13:26:53.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.031 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.031 nvme0n1 : 2.00 19279.72 75.31 0.00 0.00 6633.24 6106.76 24665.37 00:17:04.031 [2024-11-17T13:26:53.255Z] =================================================================================================================== 00:17:04.031 [2024-11-17T13:26:53.256Z] Total : 19279.72 75.31 0.00 0.00 6633.24 6106.76 24665.37 00:17:04.032 { 00:17:04.032 "results": [ 00:17:04.032 { 
00:17:04.032 "job": "nvme0n1", 00:17:04.032 "core_mask": "0x2", 00:17:04.032 "workload": "randwrite", 00:17:04.032 "status": "finished", 00:17:04.032 "queue_depth": 128, 00:17:04.032 "io_size": 4096, 00:17:04.032 "runtime": 2.001429, 00:17:04.032 "iops": 19279.724636747043, 00:17:04.032 "mibps": 75.31142436229314, 00:17:04.032 "io_failed": 0, 00:17:04.032 "io_timeout": 0, 00:17:04.032 "avg_latency_us": 6633.244342583583, 00:17:04.032 "min_latency_us": 6106.763636363637, 00:17:04.032 "max_latency_us": 24665.36727272727 00:17:04.032 } 00:17:04.032 ], 00:17:04.032 "core_count": 1 00:17:04.032 } 00:17:04.032 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:04.032 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:04.032 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:04.032 | .driver_specific 00:17:04.032 | .nvme_error 00:17:04.032 | .status_code 00:17:04.032 | .command_transient_transport_error' 00:17:04.032 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80122 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80122 ']' 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80122 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80122 00:17:04.290 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:04.290 killing process with pid 80122 00:17:04.291 Received shutdown signal, test time was about 2.000000 seconds 00:17:04.291 00:17:04.291 Latency(us) 00:17:04.291 [2024-11-17T13:26:53.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.291 [2024-11-17T13:26:53.515Z] =================================================================================================================== 00:17:04.291 [2024-11-17T13:26:53.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:04.291 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:04.291 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80122' 00:17:04.291 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80122 00:17:04.291 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80122 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # 
local rw bs qd 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80178 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80178 /var/tmp/bperf.sock 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80178 ']' 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:04.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.549 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:04.549 [2024-11-17 13:26:53.659181] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:04.549 [2024-11-17 13:26:53.659436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80178 ] 00:17:04.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:04.549 Zero copy mechanism will not be used. 
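For reference, the startup traced above launches a second bdevperf instance in RPC-server mode for the 128 KiB, qd=16 randwrite pass and then blocks until its UNIX socket is answering. A rough shell approximation of that launch-and-wait step (the bdevperf command line is copied from the trace; the poll loop is only an illustrative stand-in for the harness's waitforlisten helper, not its real implementation):

  bperf_sock=/var/tmp/bperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # poll until the bdevperf RPC server is listening and responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done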
00:17:04.808 [2024-11-17 13:26:53.796292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.809 [2024-11-17 13:26:53.837060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.809 [2024-11-17 13:26:53.887622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.809 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.809 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:04.809 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:04.809 13:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:05.068 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:05.068 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.068 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:05.068 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.068 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:05.068 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:05.327 nvme0n1 00:17:05.327 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:05.327 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.327 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:05.327 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.327 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:05.327 13:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:05.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:05.587 Zero copy mechanism will not be used. 00:17:05.587 Running I/O for 2 seconds... 
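Condensing the interleaved shell trace, the error-injection run above is driven entirely over RPC, roughly as follows (every command appears verbatim in the trace; the $rpc shorthand and the comments are editorial, and rpc_cmd's default target socket is left implicit as in the harness):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable    # start with accel error injection disabled
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32    # enable crc32c corruption injection (-i 32, as in the trace)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards the test reads the transient transport error count back from iostat:
  $rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'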
00:17:05.587 [2024-11-17 13:26:54.647202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.647275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.647305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.651888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.652053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.652080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.656655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.656925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.656947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.661578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.661737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.661784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.666305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.666464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.666485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.671013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.671211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.671232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.675715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.675910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.675932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.680400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.680635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.680657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.685295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.685478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.685498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.690002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.690165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.694709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.694866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.694904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.699349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.699530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.699550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.704051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.704236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.704257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.708778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.708956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.708978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.713526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.713707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.713728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.718197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.718359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.718379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.722856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.723034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.723055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.727558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.727734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.727754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.732340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.732552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.732578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.737087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.737250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.587 [2024-11-17 13:26:54.737271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.587 [2024-11-17 13:26:54.741796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.587 [2024-11-17 13:26:54.741960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.741980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.746429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.746595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.746615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.751166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.751343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.751364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.755866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.756029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.756050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.760720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.760858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.760880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.765696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.765940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.765961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.770672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.770836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.770857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.775403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.775576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.775596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.780168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.780330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.780349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.784982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.785149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.785169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.789752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.789904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.789924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.794439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.794601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.794621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.799189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.799327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.799348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.588 [2024-11-17 13:26:54.804102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.588 [2024-11-17 13:26:54.804236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.588 [2024-11-17 13:26:54.804255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.809266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.809435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 
13:26:54.809455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.814165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.814333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.814353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.818886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.819078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.819098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.823607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.823769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.823818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.828421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.828587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.828608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.833223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.833402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.833422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.837930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.838062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.838082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.842630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.842818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:05.849 [2024-11-17 13:26:54.842839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.847312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.847490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.847510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.852058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.852215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.852236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.856853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.856996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.857015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.861609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.861837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.861863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.866355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.849 [2024-11-17 13:26:54.866515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.849 [2024-11-17 13:26:54.866535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.849 [2024-11-17 13:26:54.871204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.871378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.871398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.875931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.876107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.876128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.880678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.880912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.880937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.885412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.885543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.885563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.890095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.890281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.890301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.894772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.894938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.894958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.899487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.899648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.899667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.904332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.904519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.904540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.909185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.909315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.909335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.914046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.914202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.914222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.918816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.918992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.919012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.923570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.923710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.923730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.928267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.928453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.928474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.933102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.933235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.933255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.937940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.938094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.938114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.942647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.942854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.942876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.947389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.947579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.947608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.952069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.952264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.952284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.956873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.957034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.957054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.961617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.961861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.961884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.966433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.966593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.966613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.971253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.971391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.971411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.975901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.976047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.976067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.980658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.980837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.980858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.985391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.985572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.985593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.990358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.990586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.990885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:54.995371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.850 [2024-11-17 13:26:54.995620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.850 [2024-11-17 13:26:54.995878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.850 [2024-11-17 13:26:55.000337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.000564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.000818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.005297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.005514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.005874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.010429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 
13:26:55.010608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.010630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.015115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.015268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.019804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.020039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.020101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.024569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.024858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.024886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.029433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.029592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.029612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.034160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.034297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.034317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.038860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.039000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.039020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.043522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with 
pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.043696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.043715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.048360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.048614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.048636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.053294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.053471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.053492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.057917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.058080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.058100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.062548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.062682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.062702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.851 [2024-11-17 13:26:55.067474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:05.851 [2024-11-17 13:26:55.067591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.851 [2024-11-17 13:26:55.067611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.072515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.072762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.072794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.077738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.077913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.077933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.082406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.082563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.082583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.087074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.087231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.087251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.091770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.091951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.091971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.096555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.096750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.096772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.101448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.101606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.101626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.106195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.106355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.106375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.110908] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.111040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.111060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.115563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.115722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.115742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.120353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.120605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.120627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.125244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.125402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.125422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.129985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.130115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.130135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.134615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.134818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.134839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.139329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.139499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.139519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.112 
[2024-11-17 13:26:55.144018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.144181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.144202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.148809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.112 [2024-11-17 13:26:55.148998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.112 [2024-11-17 13:26:55.149018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.112 [2024-11-17 13:26:55.153447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.153609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.153628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.158166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.158325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.158344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.162858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.163019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.163038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.167560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.167729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.167748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.172208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.172391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.172436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.177236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.177412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.177432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.181929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.182098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.182118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.186610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.186741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.186772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.191315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.191497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.191517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.196152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.196311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.196331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.200927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.201064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.201084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.205609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.205801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.205821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.210441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.210599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.210619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.215164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.215354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.215375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.220089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.220285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.220305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.224813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.224934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.224954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.229426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.229572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.229592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.234405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.234631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.234652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.239053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.239228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.239248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.243686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.244033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.244065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.248504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.248576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.248600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.253526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.253595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.253616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.258572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.258765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.258803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.263707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.263821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.263855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.268538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.268609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.268631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.273445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.273532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.273553] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.113 [2024-11-17 13:26:55.278173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.113 [2024-11-17 13:26:55.278244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.113 [2024-11-17 13:26:55.278265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.283035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.283107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.283127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.287734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.287822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.287844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.292497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.292578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.292599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.297281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.297348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.297370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.302025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.302106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.302126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.306806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.306876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.306896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.311705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.311819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.311842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.316512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.316587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.316608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.321318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.321520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.321557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.326312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.326394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.326414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.114 [2024-11-17 13:26:55.331481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.114 [2024-11-17 13:26:55.331551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.114 [2024-11-17 13:26:55.331572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.374 [2024-11-17 13:26:55.336656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.374 [2024-11-17 13:26:55.336762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.374 [2024-11-17 13:26:55.336784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.374 [2024-11-17 13:26:55.341673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.374 [2024-11-17 13:26:55.341872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.374 [2024-11-17 
13:26:55.341894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.374 [2024-11-17 13:26:55.346682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.346981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.347285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.351806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.352014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.352201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.356816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.357024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.357274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.361829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.362041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.362278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.366642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.366880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.367101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.371520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.371727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.372047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.376564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.376809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:06.375 [2024-11-17 13:26:55.377048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.381500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.381729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.381970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.386638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.386872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.387020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.391684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.391785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.391821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.396628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.396703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.396737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.401591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.401807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.401831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.406576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.406644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.406680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.411645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.411739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.411759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.416481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.416561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.416585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.421456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.421643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.421664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.426547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.426625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.426645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.431428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.431537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.436346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.436479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.436501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.441197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.441374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.441394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.446095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.446161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.446181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.450853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.450941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.450962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.455657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.455727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.455747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.460463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.460685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.460707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.465457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.465529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.465549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.470253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.470353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.375 [2024-11-17 13:26:55.470373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.375 [2024-11-17 13:26:55.475055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.375 [2024-11-17 13:26:55.475128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.475150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.479736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.479822] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.479844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.484478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.484661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.484683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.489370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.489439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.489458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.494201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.494266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.494286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.498914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.498992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.499013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.503740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.503837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.503859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.508561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.508768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.508804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.513476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.513550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.513570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.518268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.518335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.518355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.523005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.523081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.523101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.527806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.527878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.527898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.532505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.532723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.532758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.537356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.537418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.537437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.542067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.542138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.542157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.546800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 
13:26:55.546901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.546922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.551446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.551514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.551534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.556208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.556382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.556403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.561083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.561146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.561166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.565770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.565832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.565851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.570556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.570622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.570642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.575334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.575552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.575573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.580446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with 
pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.580538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.580559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.585300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.585371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.585391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.376 [2024-11-17 13:26:55.590078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.376 [2024-11-17 13:26:55.590138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.376 [2024-11-17 13:26:55.590157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.595206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.595281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.595301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.600008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.600090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.600110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.604973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.605038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.605058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.609717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.609817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.609838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.614547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.614606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.614626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.619288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.619356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.619376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.624018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.624118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.624138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.628850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.628919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.628939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.633647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.633707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.633727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.638433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.638514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.638533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.637 [2024-11-17 13:26:55.643272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 6423.00 IOPS, 802.88 MiB/s [2024-11-17T13:26:55.861Z] 00:17:06.637 [2024-11-17 13:26:55.643448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.643469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.637 
[2024-11-17 13:26:55.649448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.637 [2024-11-17 13:26:55.649639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.637 [2024-11-17 13:26:55.649919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.654492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.655104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.659641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.659872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.660039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.664668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.664924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.665103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.669589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.669821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.670067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.674533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.674749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.674998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.679439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.679640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.679853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.684510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.684618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.684641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.689267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.689330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.689350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.693949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.694019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.694040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.698642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.698712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.698732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.703365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.703424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.703444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.708031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.708094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.708114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.712796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.712861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.712881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.717476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.717687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.717708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.722323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.722392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.722412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.727005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.727077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.727097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.731790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.731851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.731871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.736540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.736637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.736657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.741327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.741522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.741543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.746160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.746241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.746260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.750890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.750969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.750996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.755627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.755698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.755718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.760387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.760484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.760504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.765188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.765371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.765392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.770058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.770135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.770155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.774769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.774849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.774869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.638 [2024-11-17 13:26:55.779455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.638 [2024-11-17 13:26:55.779549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.638 [2024-11-17 13:26:55.779569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.784178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.784352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.784374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.789074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.789154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.789174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.793689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.793780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.793800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.798399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.798468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.798488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.803054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.803131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.803152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.807793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.807854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.807874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.812476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.812556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 
13:26:55.812576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.817183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.817251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.817271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.821867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.821926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.826536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.826713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.826734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.831403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.831473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.831493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.836133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.836202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.836221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.840842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.840901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.840921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.845528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.845602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:06.639 [2024-11-17 13:26:55.845621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.850337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.850515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.850535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.639 [2024-11-17 13:26:55.855472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.639 [2024-11-17 13:26:55.855567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.639 [2024-11-17 13:26:55.855587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.860674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.860835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.860856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.865656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.865730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.865750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.870499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.870737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.870758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.875445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.875525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.875545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.880303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.880378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.880397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.885131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.885245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.885265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.889821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.889891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.894567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.894782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.894816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.899569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.899755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.900033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.904546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.904781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.905077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.909556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.909765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.910153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.914546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.915030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.919464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.919658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.919887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.924336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.924572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.924860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.898 [2024-11-17 13:26:55.929279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.898 [2024-11-17 13:26:55.929492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.898 [2024-11-17 13:26:55.929675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.934211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.934417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.934645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.939014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.939095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.939116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.943707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.943809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.943830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.948407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.948495] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.948516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.953121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.953187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.953207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.957847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.957942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.957962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.962547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.962616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.962636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.967310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.967389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.967408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.972058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.972164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.972185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.976727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.976817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.976838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.981439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.981638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.981659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.986279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.986342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.986362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.990952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.991044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.991065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:55.995624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:55.995696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:55.995716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.000402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.000488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.000508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.005142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.005204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.005223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.009874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.009962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.009981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.014658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 
13:26:56.014721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.014741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.019394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.019489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.019509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.024170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.024252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.024272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.028864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.028931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.028951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.033555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.033616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.033636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.038372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.038440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.038460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.043124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.043215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.043235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.047836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with 
pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.047905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.047925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.052541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.052774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.052811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.057342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.057431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.057450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.062099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.062165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.062184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.066814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.066893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.066912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.071521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.071601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.071620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.076310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.076521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.076543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.081415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.081486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.081505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.086155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.086227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.899 [2024-11-17 13:26:56.086247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.899 [2024-11-17 13:26:56.090912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.899 [2024-11-17 13:26:56.090993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.900 [2024-11-17 13:26:56.091016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.900 [2024-11-17 13:26:56.095685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.900 [2024-11-17 13:26:56.095809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.900 [2024-11-17 13:26:56.095830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.900 [2024-11-17 13:26:56.100405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.900 [2024-11-17 13:26:56.100619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.900 [2024-11-17 13:26:56.100641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.900 [2024-11-17 13:26:56.105398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.900 [2024-11-17 13:26:56.105465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.900 [2024-11-17 13:26:56.105485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.900 [2024-11-17 13:26:56.110073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.900 [2024-11-17 13:26:56.110183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.900 [2024-11-17 13:26:56.110203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.900 [2024-11-17 13:26:56.114883] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:06.900 [2024-11-17 13:26:56.114974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.900 [2024-11-17 13:26:56.114994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.160 [2024-11-17 13:26:56.119963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.160 [2024-11-17 13:26:56.120066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.160 [2024-11-17 13:26:56.120086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.160 [2024-11-17 13:26:56.125128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.160 [2024-11-17 13:26:56.125200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.160 [2024-11-17 13:26:56.125220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.160 [2024-11-17 13:26:56.129839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.160 [2024-11-17 13:26:56.129933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.160 [2024-11-17 13:26:56.129953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.160 [2024-11-17 13:26:56.134634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.160 [2024-11-17 13:26:56.134719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.160 [2024-11-17 13:26:56.134739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.160 [2024-11-17 13:26:56.139413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.160 [2024-11-17 13:26:56.139583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.160 [2024-11-17 13:26:56.139604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.160 [2024-11-17 13:26:56.144314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.160 [2024-11-17 13:26:56.144447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.160 [2024-11-17 13:26:56.144468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.160 
[2024-11-17 13:26:56.149026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.149086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.149105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.153685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.153768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.153804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.158599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.158684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.158704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.163365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.163549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.163570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.168262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.168360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.168380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.173032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.173094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.173114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.177800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.177866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.177885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.182539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.182637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.182658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.187539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.187756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.187792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.192504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.192584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.192604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.197294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.197367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.197387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.202126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.202225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.202245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.206881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.206964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.206993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.211610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.211682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.211701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.216381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.216472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.216493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.221139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.221204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.221223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.225828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.225922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.225942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.230585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.230666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.230685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.235379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.235474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.235493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.240084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.240146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.240166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.244741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.161 [2024-11-17 13:26:56.244835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.161 [2024-11-17 13:26:56.244856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.161 [2024-11-17 13:26:56.249476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.249660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.249681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.254224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.254288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.254307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.258908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.258987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.259007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.263648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.263711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.263730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.268389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.268490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.268511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.273129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.273192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.273211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.277826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.277889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.277908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.282575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.282656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.282675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.287340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.287421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.287441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.292143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.292227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.292246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.296902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.296962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.296983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.301598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.301678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.301698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.306332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.306395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.306414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.311080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.311170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 
13:26:56.311190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.315851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.315927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.315947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.320598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.320670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.320690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.325383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.325453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.325472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.330175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.330245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.330265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.334875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.334954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.334974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.339537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.339606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.339626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.344380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.344461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:07.162 [2024-11-17 13:26:56.344483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.349178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.349254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.349274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.353884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.162 [2024-11-17 13:26:56.353953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.162 [2024-11-17 13:26:56.353973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.162 [2024-11-17 13:26:56.358640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.163 [2024-11-17 13:26:56.358698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.163 [2024-11-17 13:26:56.358717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.163 [2024-11-17 13:26:56.363569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.163 [2024-11-17 13:26:56.363653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.163 [2024-11-17 13:26:56.363672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.163 [2024-11-17 13:26:56.368366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.163 [2024-11-17 13:26:56.368491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.163 [2024-11-17 13:26:56.368511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.163 [2024-11-17 13:26:56.373171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.163 [2024-11-17 13:26:56.373372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.163 [2024-11-17 13:26:56.373393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.163 [2024-11-17 13:26:56.378280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.163 [2024-11-17 13:26:56.378376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.163 [2024-11-17 13:26:56.378395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.383394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.383474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.383494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.388331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.388567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.388590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.393529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.393635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.393655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.398389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.398462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.398483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.403345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.403430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.403452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.408336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.408570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.408593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.413585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.413674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.413694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.418474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.418567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.418589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.423368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.423462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.423483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.428285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.428512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.428535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.433551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.433649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.433670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.438574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.438655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.438676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.443620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.443700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.448667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.448953] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.448976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.453809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.453934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.458582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.458655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.458683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.463603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.463752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.463789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.468330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.468544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.468567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.473441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.473530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.473551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.478194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.478338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.478358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.483033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.483111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.483131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.487771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.487852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.487873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.492483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.424 [2024-11-17 13:26:56.492697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.424 [2024-11-17 13:26:56.492719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.424 [2024-11-17 13:26:56.497502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.497611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.497632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.502269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.502340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.502361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.507000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.507106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.507126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.511896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.512002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.512023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.516649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 
13:26:56.516814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.516849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.521438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.521537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.521558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.526209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.526292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.526313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.530976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.531050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.531070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.535862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.535951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.535972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.540611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.540697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.540718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.545483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.545553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.545573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.550290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 
00:17:07.425 [2024-11-17 13:26:56.550495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.550516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.555290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.555364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.555384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.560232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.560302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.560323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.565058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.565150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.565170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.569738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.569826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.569846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.574418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.574604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.574625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.579264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.579327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.579347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.583926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.584008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.584028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.588668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.588733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.588784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.593475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.593603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.593622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.598193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.598371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.598393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.603144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.603222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.603242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.607877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.607956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.607977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.612560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.612641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.612661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.617312] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.617382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.617402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.622010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.425 [2024-11-17 13:26:56.622085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.425 [2024-11-17 13:26:56.622104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.425 [2024-11-17 13:26:56.626676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.426 [2024-11-17 13:26:56.626882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.426 [2024-11-17 13:26:56.626903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.426 [2024-11-17 13:26:56.631546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.426 [2024-11-17 13:26:56.631618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.426 [2024-11-17 13:26:56.631637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:07.426 [2024-11-17 13:26:56.636243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.426 [2024-11-17 13:26:56.636329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.426 [2024-11-17 13:26:56.636349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.426 [2024-11-17 13:26:56.641201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.426 [2024-11-17 13:26:56.641266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.426 [2024-11-17 13:26:56.641286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:07.685 6428.50 IOPS, 803.56 MiB/s [2024-11-17T13:26:56.909Z] [2024-11-17 13:26:56.647102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b73a90) with pdu=0x2000166ff3c8 00:17:07.685 [2024-11-17 13:26:56.647185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.685 [2024-11-17 13:26:56.647207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:17:07.685 00:17:07.685 Latency(us) 00:17:07.685 [2024-11-17T13:26:56.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:07.685 nvme0n1 : 2.00 6425.30 803.16 0.00 0.00 2484.81 1891.61 6047.19 00:17:07.685 [2024-11-17T13:26:56.909Z] =================================================================================================================== 00:17:07.685 [2024-11-17T13:26:56.909Z] Total : 6425.30 803.16 0.00 0.00 2484.81 1891.61 6047.19 00:17:07.685 { 00:17:07.685 "results": [ 00:17:07.685 { 00:17:07.685 "job": "nvme0n1", 00:17:07.685 "core_mask": "0x2", 00:17:07.685 "workload": "randwrite", 00:17:07.685 "status": "finished", 00:17:07.685 "queue_depth": 16, 00:17:07.685 "io_size": 131072, 00:17:07.685 "runtime": 2.003487, 00:17:07.685 "iops": 6425.297493819526, 00:17:07.685 "mibps": 803.1621867274407, 00:17:07.685 "io_failed": 0, 00:17:07.685 "io_timeout": 0, 00:17:07.685 "avg_latency_us": 2484.809764764871, 00:17:07.685 "min_latency_us": 1891.6072727272726, 00:17:07.685 "max_latency_us": 6047.185454545454 00:17:07.685 } 00:17:07.685 ], 00:17:07.685 "core_count": 1 00:17:07.685 } 00:17:07.685 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:07.685 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:07.685 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:07.685 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:07.685 | .driver_specific 00:17:07.685 | .nvme_error 00:17:07.685 | .status_code 00:17:07.685 | .command_transient_transport_error' 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 416 > 0 )) 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80178 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80178 ']' 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80178 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.945 13:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80178 00:17:07.945 killing process with pid 80178 00:17:07.945 Received shutdown signal, test time was about 2.000000 seconds 00:17:07.945 00:17:07.945 Latency(us) 00:17:07.945 [2024-11-17T13:26:57.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.945 [2024-11-17T13:26:57.169Z] =================================================================================================================== 00:17:07.945 [2024-11-17T13:26:57.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.945 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:07.945 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:17:07.945 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80178' 00:17:07.945 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80178 00:17:07.945 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80178 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79977 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79977 ']' 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79977 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79977 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.204 killing process with pid 79977 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79977' 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79977 00:17:08.204 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79977 00:17:08.464 ************************************ 00:17:08.464 END TEST nvmf_digest_error 00:17:08.464 ************************************ 00:17:08.464 00:17:08.464 real 0m16.965s 00:17:08.464 user 0m32.336s 00:17:08.464 sys 0m5.360s 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.464 rmmod nvme_tcp 00:17:08.464 rmmod nvme_fabrics 00:17:08.464 rmmod nvme_keyring 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' 
-n 79977 ']' 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79977 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79977 ']' 00:17:08.464 Process with pid 79977 is not found 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79977 00:17:08.464 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79977) - No such process 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79977 is not found' 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:08.464 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # 
return 0 00:17:08.723 ************************************ 00:17:08.723 END TEST nvmf_digest 00:17:08.723 ************************************ 00:17:08.723 00:17:08.723 real 0m34.045s 00:17:08.723 user 1m2.835s 00:17:08.723 sys 0m11.097s 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:08.723 13:26:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:08.724 13:26:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:08.724 13:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:08.724 13:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.724 13:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.724 ************************************ 00:17:08.724 START TEST nvmf_host_multipath 00:17:08.724 ************************************ 00:17:08.724 13:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:08.983 * Looking for test storage... 00:17:08.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.983 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:08.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.984 --rc genhtml_branch_coverage=1 00:17:08.984 --rc genhtml_function_coverage=1 00:17:08.984 --rc genhtml_legend=1 00:17:08.984 --rc geninfo_all_blocks=1 00:17:08.984 --rc geninfo_unexecuted_blocks=1 00:17:08.984 00:17:08.984 ' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:08.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.984 --rc genhtml_branch_coverage=1 00:17:08.984 --rc genhtml_function_coverage=1 00:17:08.984 --rc genhtml_legend=1 00:17:08.984 --rc geninfo_all_blocks=1 00:17:08.984 --rc geninfo_unexecuted_blocks=1 00:17:08.984 00:17:08.984 ' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:08.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.984 --rc genhtml_branch_coverage=1 00:17:08.984 --rc genhtml_function_coverage=1 00:17:08.984 --rc genhtml_legend=1 00:17:08.984 --rc geninfo_all_blocks=1 00:17:08.984 --rc geninfo_unexecuted_blocks=1 00:17:08.984 00:17:08.984 ' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:08.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.984 --rc genhtml_branch_coverage=1 00:17:08.984 --rc genhtml_function_coverage=1 00:17:08.984 --rc genhtml_legend=1 00:17:08.984 --rc geninfo_all_blocks=1 00:17:08.984 --rc geninfo_unexecuted_blocks=1 00:17:08.984 00:17:08.984 ' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.984 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.984 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:08.985 Cannot find device "nvmf_init_br" 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:08.985 Cannot find device "nvmf_init_br2" 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:08.985 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:09.244 Cannot find device "nvmf_tgt_br" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.244 Cannot find device "nvmf_tgt_br2" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:09.244 Cannot find device "nvmf_init_br" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:09.244 Cannot find device "nvmf_init_br2" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:09.244 Cannot find device "nvmf_tgt_br" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:09.244 Cannot find device "nvmf_tgt_br2" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:09.244 Cannot find device "nvmf_br" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:09.244 Cannot find device "nvmf_init_if" 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:09.244 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:09.244 Cannot find device "nvmf_init_if2" 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:09.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:09.245 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:09.504 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:09.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:17:09.504 00:17:09.504 --- 10.0.0.3 ping statistics --- 00:17:09.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.505 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:09.505 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:09.505 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:17:09.505 00:17:09.505 --- 10.0.0.4 ping statistics --- 00:17:09.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.505 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:17:09.505 00:17:09.505 --- 10.0.0.1 ping statistics --- 00:17:09.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.505 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:09.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:09.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:17:09.505 00:17:09.505 --- 10.0.0.2 ping statistics --- 00:17:09.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.505 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80490 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80490 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80490 ']' 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.505 13:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:09.505 [2024-11-17 13:26:58.634357] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:17:09.505 [2024-11-17 13:26:58.634448] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.764 [2024-11-17 13:26:58.789255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.764 [2024-11-17 13:26:58.852384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.764 [2024-11-17 13:26:58.852471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.764 [2024-11-17 13:26:58.852487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.764 [2024-11-17 13:26:58.852497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.764 [2024-11-17 13:26:58.852507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.764 [2024-11-17 13:26:58.854048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.764 [2024-11-17 13:26:58.854071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.764 [2024-11-17 13:26:58.933845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80490 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.701 [2024-11-17 13:26:59.883920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.701 13:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:10.960 Malloc0 00:17:10.960 13:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:11.219 13:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.787 13:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:11.787 [2024-11-17 13:27:00.965443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:11.787 13:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:12.045 [2024-11-17 13:27:01.173545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:12.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80540 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80540 /var/tmp/bdevperf.sock 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80540 ']' 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.045 13:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:12.981 13:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.981 13:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:12.981 13:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:13.241 13:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:13.809 Nvme0n1 00:17:13.809 13:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:13.809 Nvme0n1 00:17:14.067 13:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:14.067 13:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:15.003 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:15.003 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:15.262 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:15.521 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:15.521 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:15.521 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80585 00:17:15.521 13:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.088 Attaching 4 probes... 00:17:22.088 @path[10.0.0.3, 4421]: 19229 00:17:22.088 @path[10.0.0.3, 4421]: 19758 00:17:22.088 @path[10.0.0.3, 4421]: 19741 00:17:22.088 @path[10.0.0.3, 4421]: 19633 00:17:22.088 @path[10.0.0.3, 4421]: 19592 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80585 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:22.088 13:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:22.088 13:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:22.347 13:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:22.347 13:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80704 00:17:22.347 13:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:22.347 13:27:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.910 Attaching 4 probes... 00:17:28.910 @path[10.0.0.3, 4420]: 20172 00:17:28.910 @path[10.0.0.3, 4420]: 20464 00:17:28.910 @path[10.0.0.3, 4420]: 20540 00:17:28.910 @path[10.0.0.3, 4420]: 20537 00:17:28.910 @path[10.0.0.3, 4420]: 19882 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80704 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:28.910 13:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:29.169 13:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:29.169 13:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:29.169 13:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80822 00:17:29.169 13:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.735 Attaching 4 probes... 00:17:35.735 @path[10.0.0.3, 4421]: 12850 00:17:35.735 @path[10.0.0.3, 4421]: 19090 00:17:35.735 @path[10.0.0.3, 4421]: 19103 00:17:35.735 @path[10.0.0.3, 4421]: 19094 00:17:35.735 @path[10.0.0.3, 4421]: 19045 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80822 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:35.735 13:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:35.995 13:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:35.995 13:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80940 00:17:35.995 13:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:35.995 13:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.590 Attaching 4 probes... 
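The set_ANA_state calls above (host/multipath.sh@58-59) always adjust both listeners in one shot: the first argument becomes the ANA state of the 4420 listener and the second that of the 4421 listener. A compact sketch using the same RPC flags that appear in the xtrace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

set_ANA_state() {
    # $1 -> ANA state for the 4420 listener, $2 -> state for the 4421 listener.
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

set_ANA_state inaccessible inaccessible   # the transition exercised just above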
00:17:42.590 00:17:42.590 00:17:42.590 00:17:42.590 00:17:42.590 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80940 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:42.590 13:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:42.859 13:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:42.859 13:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81053 00:17:42.859 13:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:42.859 13:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:49.423 Attaching 4 probes... 
00:17:49.423 @path[10.0.0.3, 4421]: 20593 00:17:49.423 @path[10.0.0.3, 4421]: 20952 00:17:49.423 @path[10.0.0.3, 4421]: 20880 00:17:49.423 @path[10.0.0.3, 4421]: 20923 00:17:49.423 @path[10.0.0.3, 4421]: 20967 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81053 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:49.423 13:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:50.799 13:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:50.799 13:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81177 00:17:50.799 13:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:50.799 13:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.363 Attaching 4 probes... 
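Several confirm_io_on_port rounds have run by this point, so a condensed sketch of the check performed at host/multipath.sh@64-73 may help when reading the remaining cycles: the listener the target reports in the expected ANA state and the port the bpftrace @path counters actually recorded must both match the expected port. The jq, awk, cut and sed expressions are the ones shown in the xtrace; the function below omits the bpftrace start/sleep half and is only a reading aid:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

confirm_io_on_port() {
    local expected_state=$1 expected_port=$2

    # Port whose listener is in the expected ANA state, according to the target.
    local active_port
    active_port=$("$rpc" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

    # Port the bpftrace @path counters saw I/O on (first sample is enough).
    local traced_port
    traced_port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

    [[ $active_port == "$expected_port" && $traced_port == "$expected_port" ]]
}

confirm_io_on_port non_optimized 4420   # the round running just above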
00:17:57.363 @path[10.0.0.3, 4420]: 20696 00:17:57.363 @path[10.0.0.3, 4420]: 21069 00:17:57.363 @path[10.0.0.3, 4420]: 21022 00:17:57.363 @path[10.0.0.3, 4420]: 21006 00:17:57.363 @path[10.0.0.3, 4420]: 21059 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81177 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.363 13:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:57.363 [2024-11-17 13:27:46.168648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:57.363 13:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:57.363 13:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:03.929 13:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:03.929 13:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81351 00:18:03.929 13:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:03.929 13:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.502 Attaching 4 probes... 
00:18:10.502 @path[10.0.0.3, 4421]: 17554 00:18:10.502 @path[10.0.0.3, 4421]: 17836 00:18:10.502 @path[10.0.0.3, 4421]: 17880 00:18:10.502 @path[10.0.0.3, 4421]: 17928 00:18:10.502 @path[10.0.0.3, 4421]: 18020 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81351 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80540 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80540 ']' 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80540 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80540 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.502 killing process with pid 80540 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80540' 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80540 00:18:10.502 13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80540 00:18:10.502 { 00:18:10.502 "results": [ 00:18:10.502 { 00:18:10.502 "job": "Nvme0n1", 00:18:10.502 "core_mask": "0x4", 00:18:10.502 "workload": "verify", 00:18:10.502 "status": "terminated", 00:18:10.502 "verify_range": { 00:18:10.502 "start": 0, 00:18:10.502 "length": 16384 00:18:10.502 }, 00:18:10.502 "queue_depth": 128, 00:18:10.502 "io_size": 4096, 00:18:10.502 "runtime": 55.680407, 00:18:10.502 "iops": 8420.05339508384, 00:18:10.502 "mibps": 32.89083357454625, 00:18:10.502 "io_failed": 0, 00:18:10.502 "io_timeout": 0, 00:18:10.502 "avg_latency_us": 15174.160839409038, 00:18:10.502 "min_latency_us": 1370.2981818181818, 00:18:10.502 "max_latency_us": 7015926.69090909 00:18:10.502 } 00:18:10.502 ], 00:18:10.502 "core_count": 1 00:18:10.502 } 00:18:10.502 [2024-11-17 13:27:01.235576] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
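The JSON block emitted when bdevperf (pid 80540) is stopped holds the headline numbers for the whole verify job (status terminated, runtime ≈55.7 s). Purely as an illustration, and assuming that dump were saved to a file named results.json, the summary can be pulled out with jq using the field names shown above:

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json
# -> Nvme0n1: 8420.05339508384 IOPS, 32.89083357454625 MiB/s, avg latency 15174.160839409038 us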
00:18:10.502 [2024-11-17 13:27:01.235655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80540 ] 00:18:10.502 [2024-11-17 13:27:01.372195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.502 [2024-11-17 13:27:01.422422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.502 [2024-11-17 13:27:01.473783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.502 Running I/O for 90 seconds... 00:18:10.502 9986.00 IOPS, 39.01 MiB/s [2024-11-17T13:27:59.726Z] 9919.50 IOPS, 38.75 MiB/s [2024-11-17T13:27:59.726Z] 9898.33 IOPS, 38.67 MiB/s [2024-11-17T13:27:59.727Z] 9901.75 IOPS, 38.68 MiB/s [2024-11-17T13:27:59.727Z] 9886.20 IOPS, 38.62 MiB/s [2024-11-17T13:27:59.727Z] 9877.17 IOPS, 38.58 MiB/s [2024-11-17T13:27:59.727Z] 9866.14 IOPS, 38.54 MiB/s [2024-11-17T13:27:59.727Z] 9830.88 IOPS, 38.40 MiB/s [2024-11-17T13:27:59.727Z] [2024-11-17 13:27:11.346425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.346768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.346899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.346990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.347074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.347152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.347231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.347308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.347392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.347474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.347551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.347628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.347701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.347803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.347892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.347965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.348052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.348130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.348211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.348296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.348378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.348481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.348571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.348649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.348728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.348824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.348934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.349013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.349088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.349161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.349238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.349310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.349396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 
[2024-11-17 13:27:11.349467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.349545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.349625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.349701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.349795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.349878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.349956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.350112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.350185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.350271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.350344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.350417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.350497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.503 [2024-11-17 13:27:11.350572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.350659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.350736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.350844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.350927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.351069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.351382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.351535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.351686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.351863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.351943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.352012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.352077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.352148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.352241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.352311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.352385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.352500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.352584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.503 [2024-11-17 13:27:11.352668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:10.503 [2024-11-17 13:27:11.352736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.352813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.352915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.352989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.353066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.353138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.353210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.353282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.353364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.353436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.353519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.353593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.353674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.353743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.353848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.353928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.354007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.354077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:18:10.504 [2024-11-17 13:27:11.354169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.354241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.354314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.354385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.354458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.354528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.354599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.354677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.354753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.354847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.354924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.355897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.355977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.356064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.356148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.356218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.356290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.356363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.356479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.356563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.356646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.356720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.356838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.356934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357226] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.357912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.357998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.358079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.504 [2024-11-17 13:27:11.358149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.358224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.358292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.358356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.358440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.358519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.358591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:10.504 [2024-11-17 13:27:11.358667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:10.504 [2024-11-17 13:27:11.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.358824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.358907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.358989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.359060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.359135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.359202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.359286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.359355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.359426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.359485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.359548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.359621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.359700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.359808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.359880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.359942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.360093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360166] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.360238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.360384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.360575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.360704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.360901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.360970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.361045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.361092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.361125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.361157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.361189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.505
13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80540 00:18:10.505
13:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:10.505
[2024-11-17 13:27:11.361218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.361232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361493] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.361688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.361701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.364540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.505 [2024-11-17 13:27:11.364659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.364754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.505 [2024-11-17 13:27:11.364867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:10.505 [2024-11-17 13:27:11.364954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.365099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.365242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.365395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.365538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.365673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.365875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.365968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.366922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.366995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:11.367074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:11.367142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:10.506 9819.33 IOPS, 38.36 MiB/s [2024-11-17T13:27:59.730Z] 9858.40 IOPS, 38.51 MiB/s [2024-11-17T13:27:59.730Z] 9896.73 IOPS, 38.66 MiB/s [2024-11-17T13:27:59.730Z] 9926.00 IOPS, 38.77 MiB/s [2024-11-17T13:27:59.730Z] 9947.15 IOPS, 38.86 MiB/s [2024-11-17T13:27:59.730Z] 9946.86 IOPS, 38.85 MiB/s [2024-11-17T13:27:59.730Z] [2024-11-17 13:27:17.949173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 
13:27:17.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.506 [2024-11-17 13:27:17.950886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.950959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.950978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.951005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.951023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.951050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.951069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.951095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.951114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.506 [2024-11-17 13:27:17.951141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.506 [2024-11-17 13:27:17.951159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130736 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.951623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.951963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.951990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.952019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.507 [2024-11-17 13:27:17.952396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.952441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.952516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.952563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.952618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.507 [2024-11-17 13:27:17.952666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:10.507 [2024-11-17 13:27:17.952693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.952712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.952739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.958004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.958138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.958249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.958357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.958463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.958574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.958679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.958808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.958917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.959024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.959131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.959235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.959351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.959454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.959556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.959667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.959807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.959923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 
13:27:17.960032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.960197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.960300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.960412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.960533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.960641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.960750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.960915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.961032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.961141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.961240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.961353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.961458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.961566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.961671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.961795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.961914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.962029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.962131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.962234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.962334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.962445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.962546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.962657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.962772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.962897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.963000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.963113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.963219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.963332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.963444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.963551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.963663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.963781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.963899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.963994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.964091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.964207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.964317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.964433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.964596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.964707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.964864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.965019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.965122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.965230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.965336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.965443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.508 [2024-11-17 13:27:17.965547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.965657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.965788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.965910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.966024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.966141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.966249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:10.508 [2024-11-17 13:27:17.966375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.508 [2024-11-17 13:27:17.966476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.966580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.966687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 
dnr:0 00:18:10.509 [2024-11-17 13:27:17.966808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.966923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.967035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.967139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.967256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.967373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.967486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.967592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.967699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.967800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.967922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.968009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.968100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.968210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.968312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.968437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.968579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.968681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.968816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.968853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.969821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.969860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.969904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.969926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.969983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.970039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.970096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.970162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.970218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.970276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:17.970358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.970963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.970983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.971021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.971041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.971078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.971098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.971135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.971155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.971204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.971225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.971262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.971283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:17.971320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.509 [2024-11-17 13:27:17.971340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:10.509 9822.27 IOPS, 38.37 MiB/s [2024-11-17T13:27:59.733Z] 9303.56 IOPS, 36.34 MiB/s [2024-11-17T13:27:59.733Z] 9319.59 IOPS, 36.40 MiB/s [2024-11-17T13:27:59.733Z] 9335.17 IOPS, 36.47 MiB/s [2024-11-17T13:27:59.733Z] 9346.16 IOPS, 36.51 MiB/s [2024-11-17T13:27:59.733Z] 9356.05 IOPS, 36.55 MiB/s [2024-11-17T13:27:59.733Z] 9359.19 IOPS, 36.56 MiB/s [2024-11-17T13:27:59.733Z] 9364.32 IOPS, 36.58 MiB/s [2024-11-17T13:27:59.733Z] [2024-11-17 13:27:25.162748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.509 [2024-11-17 13:27:25.162808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:10.509 [2024-11-17 13:27:25.162875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.162894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.162914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.162927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.162946] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.162959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.162977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.162989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 
m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.510 [2024-11-17 13:27:25.163868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.163982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.163995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.164011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.164023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.164040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.164053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.164070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.164082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:10.510 [2024-11-17 13:27:25.164099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.510 [2024-11-17 13:27:25.164111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:10.511 [2024-11-17 13:27:25.164202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.164909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.164949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.164978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.164994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.165006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.165024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.165052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.165070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.165082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.165100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.165120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.165139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.165152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.165170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.511 [2024-11-17 13:27:25.165183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:18:10.511 [2024-11-17 13:27:25.165205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.165219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:10.511 [2024-11-17 13:27:25.165238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.511 [2024-11-17 13:27:25.165251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.165969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.165986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.165999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.166030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.166060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.166091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.166121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:10.512 [2024-11-17 13:27:25.166152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.166187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.512 [2024-11-17 13:27:25.166218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:10.512 [2024-11-17 13:27:25.166468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.512 [2024-11-17 13:27:25.166480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.166680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.166702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:25.167355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:25.167670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:25.167687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:10.513 8971.78 IOPS, 35.05 MiB/s [2024-11-17T13:27:59.737Z] 8597.96 IOPS, 33.59 MiB/s [2024-11-17T13:27:59.737Z] 8254.04 IOPS, 32.24 MiB/s [2024-11-17T13:27:59.737Z] 7936.58 IOPS, 31.00 MiB/s [2024-11-17T13:27:59.737Z] 7642.63 IOPS, 29.85 MiB/s [2024-11-17T13:27:59.737Z] 7369.68 IOPS, 28.79 MiB/s [2024-11-17T13:27:59.737Z] 7115.55 IOPS, 27.80 MiB/s [2024-11-17T13:27:59.737Z] 7209.27 IOPS, 28.16 MiB/s [2024-11-17T13:27:59.737Z] 7315.29 IOPS, 28.58 MiB/s [2024-11-17T13:27:59.737Z] 7412.94 IOPS, 28.96 MiB/s [2024-11-17T13:27:59.737Z] 7505.88 IOPS, 29.32 MiB/s [2024-11-17T13:27:59.737Z] 7592.41 IOPS, 29.66 MiB/s [2024-11-17T13:27:59.737Z] 7675.14 IOPS, 29.98 MiB/s [2024-11-17T13:27:59.737Z] [2024-11-17 13:27:38.612369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 
13:27:38.612471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.612704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.612947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.513 [2024-11-17 13:27:38.612963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.613012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.613030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.613047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.613059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.613071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.613081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.613093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.513 [2024-11-17 13:27:38.613104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.513 [2024-11-17 13:27:38.613116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:10.514 [2024-11-17 13:27:38.613394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.514 [2024-11-17 13:27:38.613808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.613983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.613996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.614007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.614020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.614031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.614043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.614054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.614066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.614078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.514 [2024-11-17 13:27:38.614090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.514 [2024-11-17 13:27:38.614101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92760 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 
[2024-11-17 13:27:38.614393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.614803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.614977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.614989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.515 [2024-11-17 13:27:38.615001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.615015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.615027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.615040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.615051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.615064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.615074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.515 [2024-11-17 13:27:38.615087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.515 [2024-11-17 13:27:38.615098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.516 [2024-11-17 13:27:38.615121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.516 [2024-11-17 13:27:38.615145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.516 [2024-11-17 13:27:38.615177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817310 is same with the state(6) to be set 00:18:10.516 [2024-11-17 13:27:38.615203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92456 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92912 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 
13:27:38.615637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.516 [2024-11-17 13:27:38.615890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.516 [2024-11-17 13:27:38.615898] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.516 [2024-11-17 13:27:38.615906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0 00:18:10.516 [2024-11-17 13:27:38.615917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.615928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.615936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.615944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.615955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.615966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.615976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.615985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93056 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.615995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.616016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.616024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93064 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.616035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.616055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.616063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93072 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.616098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.616107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93080 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.616118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.616153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.616162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93088 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.616172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.517 [2024-11-17 13:27:38.616191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.517 [2024-11-17 13:27:38.616199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93096 len:8 PRP1 0x0 PRP2 0x0 00:18:10.517 [2024-11-17 13:27:38.616210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.517 [2024-11-17 13:27:38.616363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.517 [2024-11-17 13:27:38.616387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.517 [2024-11-17 13:27:38.616409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.517 [2024-11-17 13:27:38.616431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.517 [2024-11-17 13:27:38.616455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.517 [2024-11-17 13:27:38.616486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783e50 is same with the state(6) to be set 00:18:10.517 [2024-11-17 13:27:38.617490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:10.517 [2024-11-17 13:27:38.617526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x783e50 (9): Bad file descriptor 00:18:10.517 [2024-11-17 13:27:38.617901] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.517 [2024-11-17 13:27:38.617930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x783e50 with addr=10.0.0.3, port=4421 00:18:10.517 [2024-11-17 13:27:38.617945] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x783e50 is same with the state(6) to be set
00:18:10.517 [2024-11-17 13:27:38.618010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x783e50 (9): Bad file descriptor
00:18:10.517 [2024-11-17 13:27:38.618056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:18:10.517 [2024-11-17 13:27:38.618072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:18:10.517 [2024-11-17 13:27:38.618084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:18:10.517 [2024-11-17 13:27:38.618096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:18:10.517 [2024-11-17 13:27:38.618108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:10.517 7751.53 IOPS, 30.28 MiB/s [2024-11-17T13:27:59.741Z]
7827.00 IOPS, 30.57 MiB/s [2024-11-17T13:27:59.741Z]
7894.71 IOPS, 30.84 MiB/s [2024-11-17T13:27:59.741Z]
7962.64 IOPS, 31.10 MiB/s [2024-11-17T13:27:59.741Z]
8026.57 IOPS, 31.35 MiB/s [2024-11-17T13:27:59.741Z]
8086.80 IOPS, 31.59 MiB/s [2024-11-17T13:27:59.741Z]
8144.74 IOPS, 31.82 MiB/s [2024-11-17T13:27:59.741Z]
8200.72 IOPS, 32.03 MiB/s [2024-11-17T13:27:59.741Z]
8251.61 IOPS, 32.23 MiB/s [2024-11-17T13:27:59.741Z]
8291.36 IOPS, 32.39 MiB/s [2024-11-17T13:27:59.741Z]
[2024-11-17 13:27:48.670689] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:18:10.517 8314.43 IOPS, 32.48 MiB/s [2024-11-17T13:27:59.741Z]
8328.17 IOPS, 32.53 MiB/s [2024-11-17T13:27:59.741Z]
8342.17 IOPS, 32.59 MiB/s [2024-11-17T13:27:59.741Z]
8357.55 IOPS, 32.65 MiB/s [2024-11-17T13:27:59.741Z]
8363.52 IOPS, 32.67 MiB/s [2024-11-17T13:27:59.741Z]
8374.59 IOPS, 32.71 MiB/s [2024-11-17T13:27:59.741Z]
8384.92 IOPS, 32.75 MiB/s [2024-11-17T13:27:59.741Z]
8395.32 IOPS, 32.79 MiB/s [2024-11-17T13:27:59.741Z]
8405.78 IOPS, 32.84 MiB/s [2024-11-17T13:27:59.741Z]
8415.85 IOPS, 32.87 MiB/s [2024-11-17T13:27:59.741Z]
Received shutdown signal, test time was about 55.681232 seconds
00:18:10.517
00:18:10.517 Latency(us)
00:18:10.517 [2024-11-17T13:27:59.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:10.517 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:10.517 Verification LBA range: start 0x0 length 0x4000
00:18:10.517 Nvme0n1 : 55.68 8420.05 32.89 0.00 0.00 15174.16 1370.30 7015926.69
00:18:10.517 [2024-11-17T13:27:59.741Z] ===================================================================================================================
00:18:10.517 [2024-11-17T13:27:59.741Z] Total : 8420.05 32.89 0.00 0.00 15174.16 1370.30 7015926.69
00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:18:10.517 13:27:59
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.517 rmmod nvme_tcp 00:18:10.517 rmmod nvme_fabrics 00:18:10.517 rmmod nvme_keyring 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80490 ']' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80490 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80490 ']' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80490 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80490 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.517 killing process with pid 80490 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80490' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80490 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80490 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.517 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:10.518 13:27:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:10.518 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:18:10.777 ************************************ 00:18:10.777 END TEST nvmf_host_multipath 00:18:10.777 ************************************ 00:18:10.777 00:18:10.777 real 1m1.963s 00:18:10.777 user 2m50.706s 00:18:10.777 sys 0m18.953s 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.777 ************************************ 00:18:10.777 START TEST nvmf_timeout 00:18:10.777 ************************************ 00:18:10.777 13:27:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:11.037 * Looking for test storage... 
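The multipath run above closes with a per-device summary: Nvme0n1 sustained 8420.05 IOPS (32.89 MiB/s) over the 55.68-second run with no failed or timed-out IOs. Because the job used a fixed 4096-byte IO size (see the Job line in the summary), the MiB/s column is just the IOPS column divided by 256. A small shell sketch of that cross-check, using numbers copied from the table above (illustrative only, not part of the test output):

    # Sanity-check the summary: with 4096-byte IOs, MiB/s = IOPS * 4096 / (1024*1024) = IOPS / 256.
    iops=8420.05      # "Total" row of the summary above
    io_size=4096      # bytes per IO, from the Job description line
    awk -v iops="$iops" -v sz="$io_size" \
        'BEGIN { printf "%.2f IOPS * %d B/IO = %.2f MiB/s\n", iops, sz, iops * sz / (1024 * 1024) }'
    # Prints: 8420.05 IOPS * 4096 B/IO = 32.89 MiB/s, matching the table.

The same relationship holds for the interim progress samples as well (for example 7751.53 / 256 is roughly 30.28 MiB/s).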
00:18:11.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:18:11.037 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.038 --rc genhtml_branch_coverage=1 00:18:11.038 --rc genhtml_function_coverage=1 00:18:11.038 --rc genhtml_legend=1 00:18:11.038 --rc geninfo_all_blocks=1 00:18:11.038 --rc geninfo_unexecuted_blocks=1 00:18:11.038 00:18:11.038 ' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.038 --rc genhtml_branch_coverage=1 00:18:11.038 --rc genhtml_function_coverage=1 00:18:11.038 --rc genhtml_legend=1 00:18:11.038 --rc geninfo_all_blocks=1 00:18:11.038 --rc geninfo_unexecuted_blocks=1 00:18:11.038 00:18:11.038 ' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.038 --rc genhtml_branch_coverage=1 00:18:11.038 --rc genhtml_function_coverage=1 00:18:11.038 --rc genhtml_legend=1 00:18:11.038 --rc geninfo_all_blocks=1 00:18:11.038 --rc geninfo_unexecuted_blocks=1 00:18:11.038 00:18:11.038 ' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.038 --rc genhtml_branch_coverage=1 00:18:11.038 --rc genhtml_function_coverage=1 00:18:11.038 --rc genhtml_legend=1 00:18:11.038 --rc geninfo_all_blocks=1 00:18:11.038 --rc geninfo_unexecuted_blocks=1 00:18:11.038 00:18:11.038 ' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.038 
13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.038 13:28:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.038 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:11.039 Cannot find device "nvmf_init_br" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:11.039 Cannot find device "nvmf_init_br2" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:18:11.039 Cannot find device "nvmf_tgt_br" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.039 Cannot find device "nvmf_tgt_br2" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:11.039 Cannot find device "nvmf_init_br" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:11.039 Cannot find device "nvmf_init_br2" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:11.039 Cannot find device "nvmf_tgt_br" 00:18:11.039 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:11.298 Cannot find device "nvmf_tgt_br2" 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:11.298 Cannot find device "nvmf_br" 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:11.298 Cannot find device "nvmf_init_if" 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:11.298 Cannot find device "nvmf_init_if2" 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:11.298 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
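[Editor's note] For readability, the nvmf_veth_init sequence traced above condenses to the following standalone sketch. Interface names, addresses and the 4420 port are copied from the trace; this is an illustrative recreation of the topology, not the exact common.sh code.

    # Condensed sketch of the virtual network nvmf_veth_init builds (assumed from the trace above).
    ip netns add nvmf_tgt_ns_spdk
    # Two initiator-side and two target-side veth pairs; the *_br peer ends will join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target interfaces are moved into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up, inside and outside the namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge ties the host-side peer ends together so initiators can reach the namespaced target.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow in the trace verify this topology: the host reaches 10.0.0.3/10.0.0.4 through the bridge, and the namespace reaches 10.0.0.1/10.0.0.2 in return.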
00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:11.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:18:11.299 00:18:11.299 --- 10.0.0.3 ping statistics --- 00:18:11.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.299 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:11.299 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:11.299 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:18:11.299 00:18:11.299 --- 10.0.0.4 ping statistics --- 00:18:11.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.299 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:11.299 00:18:11.299 --- 10.0.0.1 ping statistics --- 00:18:11.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.299 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:11.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:11.299 00:18:11.299 --- 10.0.0.2 ping statistics --- 00:18:11.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.299 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:11.299 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81721 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81721 00:18:11.558 13:28:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81721 ']' 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.558 13:28:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:11.558 [2024-11-17 13:28:00.605351] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:11.558 [2024-11-17 13:28:00.605427] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.558 [2024-11-17 13:28:00.749511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:11.817 [2024-11-17 13:28:00.796531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.817 [2024-11-17 13:28:00.796596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.817 [2024-11-17 13:28:00.796606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.817 [2024-11-17 13:28:00.796614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.817 [2024-11-17 13:28:00.796621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:11.817 [2024-11-17 13:28:00.797884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.817 [2024-11-17 13:28:00.797892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.817 [2024-11-17 13:28:00.868373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.386 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.386 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:12.386 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.386 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.386 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:12.645 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.645 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.645 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:12.903 [2024-11-17 13:28:01.910215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.903 13:28:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:13.162 Malloc0 00:18:13.162 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.162 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:13.730 [2024-11-17 13:28:02.863443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81775 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81775 /var/tmp/bdevperf.sock 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81775 ']' 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:13.730 [2024-11-17 13:28:02.942929] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:13.730 [2024-11-17 13:28:02.943034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81775 ] 00:18:13.989 [2024-11-17 13:28:03.089952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.989 [2024-11-17 13:28:03.131867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.989 [2024-11-17 13:28:03.182750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.926 13:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.926 13:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:14.926 13:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:14.926 13:28:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:15.185 NVMe0n1 00:18:15.185 13:28:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81794 00:18:15.185 13:28:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:15.185 13:28:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:15.444 Running I/O for 10 seconds... 
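[Editor's note] The target and bdevperf setup traced above (host/timeout.sh steps 21 through 53) can be summarized as the sketch below. Paths, NQNs, sockets and option values are taken from the trace; this is a condensed illustration of the commands issued, not the script itself.

    # Assumed paths, copied from the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bdevperf.sock

    # Target side (nvmf_tgt running inside nvmf_tgt_ns_spdk, RPC on the default /var/tmp/spdk.sock):
    # TCP transport, a 64 MiB malloc bdev with 512 B blocks, one subsystem listening on 10.0.0.3:4420.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Host side: bdevperf was started separately per the trace
    # (build/examples/bdevperf -m 0x4 -z -r $BPERF_SOCK -q 128 -o 4096 -w verify -t 10 -f),
    # then configured over its own RPC socket.
    $RPC -s $BPERF_SOCK bdev_nvme_set_options -r -1
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

With the controller attached under a 5 s controller-loss timeout and 2 s reconnect delay, the test then removes the 10.0.0.3:4420 listener while the 10 s verify workload is running, which produces the qpair state and ABORTED - SQ DELETION messages seen in the remainder of the trace.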
00:18:16.384 13:28:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:16.384 7785.00 IOPS, 30.41 MiB/s [2024-11-17T13:28:05.608Z] [2024-11-17 13:28:05.546452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 
13:28:05.546645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.384 [2024-11-17 13:28:05.546748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to 
be set 00:18:16.385 [2024-11-17 13:28:05.546834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.546997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547143] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 
00:18:16.385 [2024-11-17 13:28:05.547306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82b30 is same with the state(6) to be set 00:18:16.385 [2024-11-17 13:28:05.547424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.385 [2024-11-17 13:28:05.547450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:16.386 [2024-11-17 13:28:05.547543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547727] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.547983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.547992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:79 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.386 [2024-11-17 13:28:05.548228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.386 [2024-11-17 13:28:05.548236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67872 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:16.387 [2024-11-17 13:28:05.548555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.548991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.548999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.549008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.549016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.549026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.549035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.387 [2024-11-17 13:28:05.549044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.387 [2024-11-17 13:28:05.549052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:16.388 [2024-11-17 13:28:05.549557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:68392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.388 [2024-11-17 13:28:05.549755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.388 [2024-11-17 13:28:05.549765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.389 [2024-11-17 13:28:05.549789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.389 [2024-11-17 13:28:05.549824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.389 [2024-11-17 13:28:05.549853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.549990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.389 [2024-11-17 13:28:05.549998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.550008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed2280 is same with the state(6) to be set 00:18:16.389 [2024-11-17 13:28:05.550020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.389 [2024-11-17 13:28:05.550027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.389 [2024-11-17 13:28:05.550035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68472 len:8 PRP1 0x0 PRP2 0x0 00:18:16.389 [2024-11-17 13:28:05.550043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.550243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.389 [2024-11-17 13:28:05.550259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.550269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.389 [2024-11-17 13:28:05.550277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.550286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.389 [2024-11-17 13:28:05.550294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.550303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.389 [2024-11-17 13:28:05.550311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.389 [2024-11-17 13:28:05.550319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e64e50 is same with the state(6) to be set 00:18:16.389 [2024-11-17 13:28:05.550550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:16.389 [2024-11-17 13:28:05.550570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e64e50 (9): Bad file descriptor 00:18:16.389 [2024-11-17 13:28:05.550673] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.389 [2024-11-17 13:28:05.550692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e64e50 with addr=10.0.0.3, port=4420 00:18:16.389 [2024-11-17 13:28:05.550703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e64e50 is same with the state(6) to be set 00:18:16.389 [2024-11-17 13:28:05.550725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e64e50 (9): Bad file descriptor 00:18:16.389 [2024-11-17 13:28:05.550741] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:16.389 [2024-11-17 13:28:05.550749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:16.389 [2024-11-17 13:28:05.550758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:16.389 [2024-11-17 13:28:05.550769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:16.389 [2024-11-17 13:28:05.550779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:16.389 13:28:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:18.263 4219.50 IOPS, 16.48 MiB/s [2024-11-17T13:28:07.746Z] 2813.00 IOPS, 10.99 MiB/s [2024-11-17T13:28:07.746Z] [2024-11-17 13:28:07.568244] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.522 [2024-11-17 13:28:07.568301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e64e50 with addr=10.0.0.3, port=4420 00:18:18.522 [2024-11-17 13:28:07.568314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e64e50 is same with the state(6) to be set 00:18:18.522 [2024-11-17 13:28:07.568333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e64e50 (9): Bad file descriptor 00:18:18.522 [2024-11-17 13:28:07.568350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:18.522 [2024-11-17 13:28:07.568358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:18.522 [2024-11-17 13:28:07.568368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:18.522 [2024-11-17 13:28:07.568378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:18:18.522 [2024-11-17 13:28:07.568388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:18.522 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:18.522 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:18.522 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:18.781 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:18.781 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:18.781 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:18.781 13:28:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:19.040 13:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:19.040 13:28:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:20.678 2109.75 IOPS, 8.24 MiB/s [2024-11-17T13:28:09.902Z] 1687.80 IOPS, 6.59 MiB/s [2024-11-17T13:28:09.902Z] [2024-11-17 13:28:09.568495] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:20.678 [2024-11-17 13:28:09.568562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e64e50 with addr=10.0.0.3, port=4420 00:18:20.678 [2024-11-17 13:28:09.568577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e64e50 is same with the state(6) to be set 00:18:20.678 [2024-11-17 13:28:09.568598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e64e50 (9): Bad file descriptor 00:18:20.678 [2024-11-17 13:28:09.568615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:20.678 [2024-11-17 13:28:09.568624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:20.678 [2024-11-17 13:28:09.568634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:20.678 [2024-11-17 13:28:09.568644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:20.678 [2024-11-17 13:28:09.568654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:22.623 1406.50 IOPS, 5.49 MiB/s [2024-11-17T13:28:11.847Z] 1205.57 IOPS, 4.71 MiB/s [2024-11-17T13:28:11.847Z] [2024-11-17 13:28:11.568682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:22.623 [2024-11-17 13:28:11.568728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:22.623 [2024-11-17 13:28:11.568738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:22.623 [2024-11-17 13:28:11.568746] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:18:22.623 [2024-11-17 13:28:11.568756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
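The get_controller / get_bdev steps traced above poll the still-running bdevperf instance over its RPC socket and compare the reported controller and bdev names against NVMe0 / NVMe0n1: while reconnect attempts are still being retried (before --ctrlr-loss-timeout-sec expires) both names come back, and in the later check at 13:28:13 both come back empty. A minimal sketch of that check, assuming the same rpc.py path and bdevperf socket as this run (the variable names are illustrative, not part of the test suite):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Ask bdevperf which NVMe controllers and bdevs it still knows about.
  ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
  # Before the controller-loss timeout expires both names are still reported;
  # after it expires the controller is dropped and both strings are empty,
  # exactly as the [[ '' == '' ]] checks later in this trace show.
  if [[ "$ctrlr" == "NVMe0" && "$bdev" == "NVMe0n1" ]]; then
    echo "controller still registered"
  else
    echo "controller and bdev gone"
  fi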
00:18:23.560 1054.88 IOPS, 4.12 MiB/s 00:18:23.560 Latency(us) 00:18:23.560 [2024-11-17T13:28:12.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.560 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:23.560 Verification LBA range: start 0x0 length 0x4000 00:18:23.560 NVMe0n1 : 8.09 1043.06 4.07 15.82 0.00 120675.74 2710.81 7046430.72 00:18:23.560 [2024-11-17T13:28:12.784Z] =================================================================================================================== 00:18:23.560 [2024-11-17T13:28:12.784Z] Total : 1043.06 4.07 15.82 0.00 120675.74 2710.81 7046430.72 00:18:23.560 { 00:18:23.560 "results": [ 00:18:23.560 { 00:18:23.560 "job": "NVMe0n1", 00:18:23.560 "core_mask": "0x4", 00:18:23.560 "workload": "verify", 00:18:23.560 "status": "finished", 00:18:23.560 "verify_range": { 00:18:23.560 "start": 0, 00:18:23.560 "length": 16384 00:18:23.560 }, 00:18:23.560 "queue_depth": 128, 00:18:23.560 "io_size": 4096, 00:18:23.560 "runtime": 8.090581, 00:18:23.560 "iops": 1043.0647687724775, 00:18:23.560 "mibps": 4.07447175301749, 00:18:23.560 "io_failed": 128, 00:18:23.560 "io_timeout": 0, 00:18:23.560 "avg_latency_us": 120675.74027526344, 00:18:23.560 "min_latency_us": 2710.807272727273, 00:18:23.560 "max_latency_us": 7046430.72 00:18:23.560 } 00:18:23.560 ], 00:18:23.560 "core_count": 1 00:18:23.560 } 00:18:24.128 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:24.128 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.128 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:24.387 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:24.387 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:24.387 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:24.387 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81794 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81775 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81775 ']' 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81775 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81775 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.646 killing process with pid 81775 00:18:24.646 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81775' 00:18:24.646 Received shutdown signal, test time was about 9.277021 seconds 
00:18:24.646 00:18:24.646 Latency(us) 00:18:24.646 [2024-11-17T13:28:13.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.646 [2024-11-17T13:28:13.870Z] =================================================================================================================== 00:18:24.646 [2024-11-17T13:28:13.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.647 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81775 00:18:24.647 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81775 00:18:24.906 13:28:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:24.906 [2024-11-17 13:28:14.108209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:24.906 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81912 00:18:24.906 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:24.906 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81912 /var/tmp/bdevperf.sock 00:18:24.906 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81912 ']' 00:18:24.906 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.906 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.165 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.165 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.165 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:25.165 [2024-11-17 13:28:14.171085] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:25.165 [2024-11-17 13:28:14.171159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81912 ] 00:18:25.165 [2024-11-17 13:28:14.309723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.165 [2024-11-17 13:28:14.352658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.424 [2024-11-17 13:28:14.404383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.424 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.424 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:25.424 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:25.682 13:28:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:25.941 NVMe0n1 00:18:25.941 13:28:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81927 00:18:25.941 13:28:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:25.941 13:28:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.941 Running I/O for 10 seconds... 
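The bring-up traced above re-adds the TCP listener on the target, starts a fresh bdevperf idle, attaches the controller with the controller-loss knobs this timeout test exercises, and then launches the workload. A condensed sketch of that sequence, assembled only from the commands visible in the trace (same paths, socket, and flags as this run; it is an illustration, not the timeout.sh script itself):

  spdk=/home/vagrant/spdk_repo/spdk
  # 1. Re-add the TCP listener on the target (default target RPC socket).
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420
  # 2. Start bdevperf idle (-z) with its own RPC socket; the harness then waits
  #    for /var/tmp/bdevperf.sock to appear (waitforlisten).
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  # 3. Apply the bdev_nvme options used by the test (-r -1 as in the trace) and
  #    attach the controller with a 5 s controller-loss timeout, 2 s fast I/O
  #    failure, and 1 s reconnect delay.
  "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # 4. Kick off the workload that was defined on the bdevperf command line.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &

The nvmf_subsystem_remove_listener call traced just after this is what pulls the connection out from under that 10-second verify workload.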
00:18:26.877 13:28:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:27.139 8176.00 IOPS, 31.94 MiB/s [2024-11-17T13:28:16.363Z] [2024-11-17 13:28:16.276802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf820d0 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state *ERROR* for tqpair=0xf820d0 repeats many more times, with timestamps up to 13:28:16.277562; the identical entries are elided here ...]
00:18:27.140 [2024-11-17 13:28:16.277620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:27.140 [2024-11-17 13:28:16.277648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... every other command still outstanding on qid:1 (READs for lba 73896-74776 and WRITEs for lba 74792-74904, len:8 each) is printed and completed the same way, ABORTED - SQ DELETION (00/08); those entry pairs are elided here ...]
00:18:27.144 [2024-11-17 13:28:16.279933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fd280 is same with the state(6) to be set
00:18:27.144 [2024-11-17 13:28:16.279944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:27.144 [2024-11-17 13:28:16.279951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:27.144 [2024-11-17 13:28:16.279958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74784 len:8 PRP1 0x0 PRP2 0x0
00:18:27.144 [2024-11-17 13:28:16.279966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:27.144 [2024-11-17 13:28:16.280214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:27.144 [2024-11-17 13:28:16.280299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor
00:18:27.144 [2024-11-17 13:28:16.280393] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:27.144 [2024-11-17 13:28:16.280413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238fe50 with addr=10.0.0.3, port=4420
00:18:27.144 [2024-11-17 13:28:16.280423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238fe50 is same with the state(6) to be set
00:18:27.144 [2024-11-17 13:28:16.280443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor
00:18:27.144 [2024-11-17 13:28:16.280457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:18:27.144 [2024-11-17 13:28:16.280466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:18:27.144 [2024-11-17 13:28:16.280476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:18:27.144 [2024-11-17 13:28:16.280485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:18:27.144 [2024-11-17 13:28:16.280494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:27.144 13:28:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:18:28.120 4618.00 IOPS, 18.04 MiB/s [2024-11-17T13:28:17.344Z] [2024-11-17 13:28:17.280605] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:28.120 [2024-11-17 13:28:17.280657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238fe50 with addr=10.0.0.3, port=4420
00:18:28.120 [2024-11-17 13:28:17.280669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238fe50 is same with the state(6) to be set
00:18:28.120 [2024-11-17 13:28:17.280687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor
00:18:28.120 [2024-11-17 13:28:17.280702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:18:28.120 [2024-11-17 13:28:17.280710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:18:28.120 [2024-11-17 13:28:17.280720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:18:28.120 [2024-11-17 13:28:17.280729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:18:28.120 [2024-11-17 13:28:17.280739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:28.120 13:28:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:28.379 [2024-11-17 13:28:17.506065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:28.379 13:28:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81927
00:18:29.205 3078.67 IOPS, 12.03 MiB/s [2024-11-17T13:28:18.429Z] [2024-11-17 13:28:18.291669] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:18:31.082 2309.00 IOPS, 9.02 MiB/s [2024-11-17T13:28:21.242Z] 3656.20 IOPS, 14.28 MiB/s [2024-11-17T13:28:22.179Z] 4801.50 IOPS, 18.76 MiB/s [2024-11-17T13:28:23.555Z] 5748.71 IOPS, 22.46 MiB/s [2024-11-17T13:28:24.492Z] 6470.12 IOPS, 25.27 MiB/s [2024-11-17T13:28:25.429Z] 7041.00 IOPS, 27.50 MiB/s [2024-11-17T13:28:25.429Z] 7494.50 IOPS, 29.28 MiB/s
00:18:36.205 Latency(us)
00:18:36.205 [2024-11-17T13:28:25.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:36.205 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:36.205 Verification LBA range: start 0x0 length 0x4000
00:18:36.205 NVMe0n1 : 10.01 7502.00 29.30 0.00 0.00 17029.10 1184.12 3035150.89
00:18:36.205 [2024-11-17T13:28:25.429Z] ===================================================================================================================
00:18:36.205 [2024-11-17T13:28:25.429Z] Total : 7502.00 29.30 0.00 0.00 17029.10 1184.12 3035150.89
00:18:36.205 {
00:18:36.205 "results": [
00:18:36.205 {
00:18:36.205 "job": "NVMe0n1",
00:18:36.205 "core_mask": "0x4",
00:18:36.205 "workload": "verify",
00:18:36.205 "status": "finished",
00:18:36.205 "verify_range": {
00:18:36.205 "start": 0,
00:18:36.205 "length": 16384
00:18:36.205 },
00:18:36.205 "queue_depth": 128,
00:18:36.205 "io_size": 4096,
00:18:36.205 "runtime": 10.00707,
00:18:36.205 "iops": 7501.996088765243,
00:18:36.205 "mibps": 29.30467222173923,
00:18:36.205 "io_failed": 0,
00:18:36.205 "io_timeout": 0,
00:18:36.205 "avg_latency_us": 17029.098808311424,
00:18:36.205 "min_latency_us": 1184.1163636363635,
00:18:36.205 "max_latency_us": 3035150.8945454545
00:18:36.205 }
00:18:36.205 ],
00:18:36.205 "core_count": 1
00:18:36.205 }
00:18:36.205 13:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82036
00:18:36.205 13:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:36.205 13:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:18:36.205 Running I/O for 10 seconds...
00:18:37.142 13:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:37.403 9487.00 IOPS, 37.06 MiB/s [2024-11-17T13:28:26.627Z] [2024-11-17 13:28:26.431780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.403 [2024-11-17 13:28:26.431963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.431980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.431989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87360 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.431997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.432006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.432013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.432023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.432031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.432041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.432048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.432065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.432073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.432091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.403 [2024-11-17 13:28:26.432100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.403 [2024-11-17 13:28:26.432108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:37.404 [2024-11-17 13:28:26.432191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 
13:28:26.432350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.404 [2024-11-17 13:28:26.432513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.404 [2024-11-17 13:28:26.432843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.404 [2024-11-17 13:28:26.432853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.432861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.432992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.432999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 
[2024-11-17 13:28:26.433151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.405 [2024-11-17 13:28:26.433449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.405 [2024-11-17 13:28:26.433562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.405 [2024-11-17 13:28:26.433570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88280 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.433723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 
[2024-11-17 13:28:26.433889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.433974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.433991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.434009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.434028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.434046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.406 [2024-11-17 13:28:26.434064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:37.406 [2024-11-17 13:28:26.434217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fe350 is same with the state(6) to be set 00:18:37.406 [2024-11-17 13:28:26.434236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:37.406 [2024-11-17 13:28:26.434243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:37.406 [2024-11-17 13:28:26.434250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:18:37.406 [2024-11-17 13:28:26.434258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.406 [2024-11-17 13:28:26.434496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:37.406 [2024-11-17 13:28:26.434564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor 00:18:37.406 [2024-11-17 13:28:26.434651] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:37.406 [2024-11-17 13:28:26.434670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238fe50 with addr=10.0.0.3, port=4420 
00:18:37.406 [2024-11-17 13:28:26.434685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238fe50 is same with the state(6) to be set 00:18:37.406 [2024-11-17 13:28:26.434701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor 00:18:37.406 [2024-11-17 13:28:26.434715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:37.406 [2024-11-17 13:28:26.434724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:37.406 [2024-11-17 13:28:26.434734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:37.406 [2024-11-17 13:28:26.434743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:37.406 [2024-11-17 13:28:26.434752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:37.406 13:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:38.344 5459.50 IOPS, 21.33 MiB/s [2024-11-17T13:28:27.568Z] [2024-11-17 13:28:27.434867] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.344 [2024-11-17 13:28:27.434903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238fe50 with addr=10.0.0.3, port=4420 00:18:38.344 [2024-11-17 13:28:27.434915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238fe50 is same with the state(6) to be set 00:18:38.344 [2024-11-17 13:28:27.434934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor 00:18:38.344 [2024-11-17 13:28:27.434950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:38.344 [2024-11-17 13:28:27.434959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:38.344 [2024-11-17 13:28:27.434969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:38.344 [2024-11-17 13:28:27.434978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:18:38.344 [2024-11-17 13:28:27.434987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:39.280 3639.67 IOPS, 14.22 MiB/s [2024-11-17T13:28:28.504Z] [2024-11-17 13:28:28.435050] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.280 [2024-11-17 13:28:28.435082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238fe50 with addr=10.0.0.3, port=4420 00:18:39.280 [2024-11-17 13:28:28.435093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238fe50 is same with the state(6) to be set 00:18:39.280 [2024-11-17 13:28:28.435108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor 00:18:39.280 [2024-11-17 13:28:28.435123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:39.280 [2024-11-17 13:28:28.435131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:39.280 [2024-11-17 13:28:28.435138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:39.280 [2024-11-17 13:28:28.435145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:39.280 [2024-11-17 13:28:28.435153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:40.475 2729.75 IOPS, 10.66 MiB/s [2024-11-17T13:28:29.699Z] [2024-11-17 13:28:29.437947] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.475 [2024-11-17 13:28:29.437980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238fe50 with addr=10.0.0.3, port=4420 00:18:40.475 [2024-11-17 13:28:29.437992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238fe50 is same with the state(6) to be set 00:18:40.475 [2024-11-17 13:28:29.438182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238fe50 (9): Bad file descriptor 00:18:40.475 [2024-11-17 13:28:29.438372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:40.475 [2024-11-17 13:28:29.438383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:40.475 [2024-11-17 13:28:29.438391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:40.475 [2024-11-17 13:28:29.438399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:18:40.475 [2024-11-17 13:28:29.438406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:40.475 13:28:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:40.475 [2024-11-17 13:28:29.683247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:40.734 13:28:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82036 00:18:41.301 2183.80 IOPS, 8.53 MiB/s [2024-11-17T13:28:30.525Z] [2024-11-17 13:28:30.460138] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:18:43.216 3405.83 IOPS, 13.30 MiB/s [2024-11-17T13:28:33.377Z] 4571.86 IOPS, 17.86 MiB/s [2024-11-17T13:28:34.311Z] 5452.62 IOPS, 21.30 MiB/s [2024-11-17T13:28:35.692Z] 6131.89 IOPS, 23.95 MiB/s [2024-11-17T13:28:35.692Z] 6673.60 IOPS, 26.07 MiB/s 00:18:46.468 Latency(us) 00:18:46.468 [2024-11-17T13:28:35.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.468 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:46.468 Verification LBA range: start 0x0 length 0x4000 00:18:46.468 NVMe0n1 : 10.01 6680.12 26.09 4640.09 0.00 11287.37 592.06 3019898.88 00:18:46.468 [2024-11-17T13:28:35.692Z] =================================================================================================================== 00:18:46.468 [2024-11-17T13:28:35.692Z] Total : 6680.12 26.09 4640.09 0.00 11287.37 0.00 3019898.88 00:18:46.468 { 00:18:46.468 "results": [ 00:18:46.468 { 00:18:46.468 "job": "NVMe0n1", 00:18:46.468 "core_mask": "0x4", 00:18:46.468 "workload": "verify", 00:18:46.468 "status": "finished", 00:18:46.468 "verify_range": { 00:18:46.468 "start": 0, 00:18:46.468 "length": 16384 00:18:46.468 }, 00:18:46.468 "queue_depth": 128, 00:18:46.468 "io_size": 4096, 00:18:46.468 "runtime": 10.006261, 00:18:46.468 "iops": 6680.117578384174, 00:18:46.468 "mibps": 26.094209290563178, 00:18:46.468 "io_failed": 46430, 00:18:46.468 "io_timeout": 0, 00:18:46.468 "avg_latency_us": 11287.371157597534, 00:18:46.468 "min_latency_us": 592.0581818181818, 00:18:46.468 "max_latency_us": 3019898.88 00:18:46.468 } 00:18:46.468 ], 00:18:46.468 "core_count": 1 00:18:46.468 } 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81912 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81912 ']' 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81912 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81912 00:18:46.468 killing process with pid 81912 00:18:46.468 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.468 00:18:46.468 Latency(us) 00:18:46.468 [2024-11-17T13:28:35.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.468 [2024-11-17T13:28:35.692Z] =================================================================================================================== 00:18:46.468 [2024-11-17T13:28:35.692Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81912' 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81912 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81912 00:18:46.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82146 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82146 /var/tmp/bdevperf.sock 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82146 ']' 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.468 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:46.468 [2024-11-17 13:28:35.594782] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
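For the next phase a fresh bdevperf is launched with -z, so it comes up idle and only listens on /var/tmp/bdevperf.sock for RPCs rather than starting a workload on its own; the harness then waits for that socket before configuring anything. A minimal sketch of that launch-and-wait pattern with the flags shown above (the polling loop stands in for the waitforlisten helper and is illustrative):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock

# -m 0x4: run on core 2 only; -z: start idle and wait for RPCs on $SOCK.
# -q 128 -o 4096 -w randread -t 10: queue depth 128, 4 KiB random reads, 10 s run.
"$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!

# Wait until the RPC socket is up before issuing any bdev_nvme_* RPCs.
while [ ! -S "$SOCK" ]; do sleep 0.1; done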
00:18:46.468 [2024-11-17 13:28:35.594884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82146 ] 00:18:46.727 [2024-11-17 13:28:35.739234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.727 [2024-11-17 13:28:35.793016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.727 [2024-11-17 13:28:35.844330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.727 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.727 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:46.727 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82146 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:46.727 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82159 00:18:46.727 13:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:46.986 13:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:47.553 NVMe0n1 00:18:47.553 13:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82196 00:18:47.553 13:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.553 13:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:47.553 Running I/O for 10 seconds... 
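The controller is attached with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, so after the connection drops the bdev layer retries roughly every 2 seconds and gives the controller up once about 5 seconds pass without a successful reconnect; perform_tests then kicks off the 10-second randread job. Condensed, the three RPCs issued above look like this (paths and arguments are copied from the trace; the comments are interpretation):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bdevperf.sock

"$RPC" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2   # retry every ~2 s, give up after ~5 s
"$PERF" -s "$SOCK" perform_tests                             # start the configured 10 s workload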
00:18:48.489 13:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:48.751 18800.00 IOPS, 73.44 MiB/s [2024-11-17T13:28:37.975Z] [2024-11-17 13:28:37.754780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.751 [2024-11-17 13:28:37.754972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.752 [2024-11-17 
13:28:37.754979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set
00:18:48.753 [2024-11-17 13:28:37.755590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86aa0 is same with the state(6) to be set 00:18:48.753 [2024-11-17 13:28:37.755745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.755990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.755998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123704 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.753 [2024-11-17 13:28:37.756320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.753 [2024-11-17 13:28:37.756328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:48.754 [2024-11-17 13:28:37.756386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 
13:28:37.756587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.756990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.756999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.757010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.757018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.757028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.757037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.757047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.757055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.757065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.757073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.757083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.757092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.754 [2024-11-17 13:28:37.757102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.754 [2024-11-17 13:28:37.757110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 
[2024-11-17 13:28:37.757400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.755 [2024-11-17 13:28:37.757866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.755 [2024-11-17 13:28:37.757876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.757894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.757917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.757935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.757954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.757972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.757990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.757998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.756 [2024-11-17 13:28:37.758227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce140 is same with the state(6) to be set 00:18:48.756 [2024-11-17 13:28:37.758247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:48.756 [2024-11-17 13:28:37.758253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:48.756 [2024-11-17 13:28:37.758261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32352 len:8 PRP1 0x0 PRP2 0x0 00:18:48.756 [2024-11-17 13:28:37.758269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.756 [2024-11-17 13:28:37.758597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:48.756 [2024-11-17 13:28:37.758702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160e50 (9): Bad file descriptor 00:18:48.756 [2024-11-17 13:28:37.758846] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.756 [2024-11-17 13:28:37.758868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160e50 with addr=10.0.0.3, port=4420 00:18:48.756 [2024-11-17 13:28:37.758879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160e50 is same with the state(6) to be set 00:18:48.756 [2024-11-17 13:28:37.758897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160e50 (9): Bad file descriptor 00:18:48.756 [2024-11-17 13:28:37.758912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:48.756 [2024-11-17 13:28:37.758921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:48.756 [2024-11-17 13:28:37.758932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:18:48.756 [2024-11-17 13:28:37.758942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:48.756 [2024-11-17 13:28:37.758952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:48.756 13:28:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82196 00:18:50.628 10417.00 IOPS, 40.69 MiB/s [2024-11-17T13:28:39.852Z] 6944.67 IOPS, 27.13 MiB/s [2024-11-17T13:28:39.852Z] [2024-11-17 13:28:39.759062] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.628 [2024-11-17 13:28:39.759118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160e50 with addr=10.0.0.3, port=4420 00:18:50.628 [2024-11-17 13:28:39.759130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160e50 is same with the state(6) to be set 00:18:50.628 [2024-11-17 13:28:39.759148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160e50 (9): Bad file descriptor 00:18:50.628 [2024-11-17 13:28:39.759163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:50.628 [2024-11-17 13:28:39.759172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:50.628 [2024-11-17 13:28:39.759181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:50.628 [2024-11-17 13:28:39.759190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:50.628 [2024-11-17 13:28:39.759199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:52.501 5208.50 IOPS, 20.35 MiB/s [2024-11-17T13:28:41.985Z] 4166.80 IOPS, 16.28 MiB/s [2024-11-17T13:28:41.985Z] [2024-11-17 13:28:41.759337] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.761 [2024-11-17 13:28:41.759373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160e50 with addr=10.0.0.3, port=4420 00:18:52.761 [2024-11-17 13:28:41.759402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160e50 is same with the state(6) to be set 00:18:52.761 [2024-11-17 13:28:41.759419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160e50 (9): Bad file descriptor 00:18:52.761 [2024-11-17 13:28:41.759434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:52.761 [2024-11-17 13:28:41.759443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:52.761 [2024-11-17 13:28:41.759451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:52.761 [2024-11-17 13:28:41.759464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:18:52.761 [2024-11-17 13:28:41.759474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:54.634 3472.33 IOPS, 13.56 MiB/s [2024-11-17T13:28:43.858Z] 2976.29 IOPS, 11.63 MiB/s [2024-11-17T13:28:43.858Z] [2024-11-17 13:28:43.759616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:54.634 [2024-11-17 13:28:43.759664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:54.634 [2024-11-17 13:28:43.759690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:54.634 [2024-11-17 13:28:43.759698] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:18:54.634 [2024-11-17 13:28:43.759707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:55.571 2604.25 IOPS, 10.17 MiB/s 00:18:55.571 Latency(us) 00:18:55.571 [2024-11-17T13:28:44.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.571 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:55.571 NVMe0n1 : 8.12 2564.45 10.02 15.76 0.00 49538.00 1228.80 7015926.69 00:18:55.571 [2024-11-17T13:28:44.795Z] =================================================================================================================== 00:18:55.571 [2024-11-17T13:28:44.795Z] Total : 2564.45 10.02 15.76 0.00 49538.00 1228.80 7015926.69 00:18:55.571 { 00:18:55.571 "results": [ 00:18:55.571 { 00:18:55.571 "job": "NVMe0n1", 00:18:55.571 "core_mask": "0x4", 00:18:55.571 "workload": "randread", 00:18:55.571 "status": "finished", 00:18:55.571 "queue_depth": 128, 00:18:55.571 "io_size": 4096, 00:18:55.571 "runtime": 8.12417, 00:18:55.571 "iops": 2564.4465834663724, 00:18:55.571 "mibps": 10.017369466665517, 00:18:55.571 "io_failed": 128, 00:18:55.571 "io_timeout": 0, 00:18:55.571 "avg_latency_us": 49538.002640275474, 00:18:55.571 "min_latency_us": 1228.8, 00:18:55.571 "max_latency_us": 7015926.69090909 00:18:55.571 } 00:18:55.571 ], 00:18:55.571 "core_count": 1 00:18:55.571 } 00:18:55.571 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.571 Attaching 5 probes... 
00:18:55.571 1313.498002: reset bdev controller NVMe0 00:18:55.571 1313.655500: reconnect bdev controller NVMe0 00:18:55.571 3313.893337: reconnect delay bdev controller NVMe0 00:18:55.571 3313.925088: reconnect bdev controller NVMe0 00:18:55.571 5314.177027: reconnect delay bdev controller NVMe0 00:18:55.571 5314.190130: reconnect bdev controller NVMe0 00:18:55.571 7314.497217: reconnect delay bdev controller NVMe0 00:18:55.571 7314.512042: reconnect bdev controller NVMe0 00:18:55.571 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:55.571 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:55.571 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82159 00:18:55.571 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82146 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82146 ']' 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82146 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82146 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:55.830 killing process with pid 82146 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82146' 00:18:55.830 Received shutdown signal, test time was about 8.197565 seconds 00:18:55.830 00:18:55.830 Latency(us) 00:18:55.830 [2024-11-17T13:28:45.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.830 [2024-11-17T13:28:45.054Z] =================================================================================================================== 00:18:55.830 [2024-11-17T13:28:45.054Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82146 00:18:55.830 13:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82146 00:18:55.830 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.089 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:56.089 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:56.089 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.089 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.348 13:28:45 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.348 rmmod nvme_tcp 00:18:56.348 rmmod nvme_fabrics 00:18:56.348 rmmod nvme_keyring 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81721 ']' 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81721 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81721 ']' 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81721 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81721 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.348 killing process with pid 81721 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81721' 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81721 00:18:56.348 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81721 00:18:56.606 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:56.607 13:28:45 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:56.607 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:18:56.866 00:18:56.866 real 0m45.938s 00:18:56.866 user 2m13.669s 00:18:56.866 sys 0m5.743s 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.866 ************************************ 00:18:56.866 END TEST nvmf_timeout 00:18:56.866 ************************************ 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:56.866 00:18:56.866 real 4m59.213s 00:18:56.866 user 12m57.547s 00:18:56.866 sys 1m9.874s 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.866 13:28:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.866 ************************************ 00:18:56.866 END TEST nvmf_host 00:18:56.866 ************************************ 00:18:56.866 13:28:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:18:56.866 13:28:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:18:56.866 00:18:56.866 real 12m19.392s 00:18:56.866 user 29m23.595s 00:18:56.866 sys 3m12.410s 00:18:56.866 13:28:45 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.866 ************************************ 00:18:56.866 END TEST nvmf_tcp 00:18:56.866 13:28:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.866 ************************************ 00:18:56.866 13:28:46 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:18:56.866 13:28:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:56.866 13:28:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:56.866 13:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.866 13:28:46 -- common/autotest_common.sh@10 -- # set +x 00:18:56.866 ************************************ 00:18:56.866 START TEST nvmf_dif 00:18:56.866 ************************************ 00:18:56.866 13:28:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:57.125 * Looking for test storage... 
00:18:57.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:57.125 13:28:46 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:57.125 13:28:46 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:18:57.125 13:28:46 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:57.125 13:28:46 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:57.125 13:28:46 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:57.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.126 --rc genhtml_branch_coverage=1 00:18:57.126 --rc genhtml_function_coverage=1 00:18:57.126 --rc genhtml_legend=1 00:18:57.126 --rc geninfo_all_blocks=1 00:18:57.126 --rc geninfo_unexecuted_blocks=1 00:18:57.126 00:18:57.126 ' 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:57.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.126 --rc genhtml_branch_coverage=1 00:18:57.126 --rc genhtml_function_coverage=1 00:18:57.126 --rc genhtml_legend=1 00:18:57.126 --rc geninfo_all_blocks=1 00:18:57.126 --rc geninfo_unexecuted_blocks=1 00:18:57.126 00:18:57.126 ' 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:18:57.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.126 --rc genhtml_branch_coverage=1 00:18:57.126 --rc genhtml_function_coverage=1 00:18:57.126 --rc genhtml_legend=1 00:18:57.126 --rc geninfo_all_blocks=1 00:18:57.126 --rc geninfo_unexecuted_blocks=1 00:18:57.126 00:18:57.126 ' 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:57.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.126 --rc genhtml_branch_coverage=1 00:18:57.126 --rc genhtml_function_coverage=1 00:18:57.126 --rc genhtml_legend=1 00:18:57.126 --rc geninfo_all_blocks=1 00:18:57.126 --rc geninfo_unexecuted_blocks=1 00:18:57.126 00:18:57.126 ' 00:18:57.126 13:28:46 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.126 13:28:46 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.126 13:28:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.126 13:28:46 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.126 13:28:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.126 13:28:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:57.126 13:28:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.126 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.126 13:28:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:57.126 13:28:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:57.126 13:28:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:57.126 13:28:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:57.126 13:28:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:57.126 13:28:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:57.126 13:28:46 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:57.126 Cannot find device "nvmf_init_br" 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:57.126 Cannot find device "nvmf_init_br2" 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:57.126 Cannot find device "nvmf_tgt_br" 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@164 -- # true 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.126 Cannot find device "nvmf_tgt_br2" 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@165 -- # true 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:57.126 Cannot find device "nvmf_init_br" 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@166 -- # true 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:57.126 Cannot find device "nvmf_init_br2" 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@167 -- # true 00:18:57.126 13:28:46 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:57.127 Cannot find device "nvmf_tgt_br" 00:18:57.127 13:28:46 nvmf_dif -- nvmf/common.sh@168 -- # true 00:18:57.127 13:28:46 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:57.127 Cannot find device "nvmf_tgt_br2" 00:18:57.127 13:28:46 nvmf_dif -- nvmf/common.sh@169 -- # true 00:18:57.127 13:28:46 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:57.386 Cannot find device "nvmf_br" 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@170 -- # true 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:18:57.386 Cannot find device "nvmf_init_if" 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@171 -- # true 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:57.386 Cannot find device "nvmf_init_if2" 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@172 -- # true 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@173 -- # true 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@174 -- # true 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.386 13:28:46 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:57.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:18:57.386 00:18:57.386 --- 10.0.0.3 ping statistics --- 00:18:57.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.386 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:57.386 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:57.386 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:18:57.386 00:18:57.386 --- 10.0.0.4 ping statistics --- 00:18:57.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.386 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:18:57.386 00:18:57.386 --- 10.0.0.1 ping statistics --- 00:18:57.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.386 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:57.386 13:28:46 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:57.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:57.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:18:57.645 00:18:57.645 --- 10.0.0.2 ping statistics --- 00:18:57.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.645 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:57.646 13:28:46 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.646 13:28:46 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:18:57.646 13:28:46 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:18:57.646 13:28:46 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:57.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:57.904 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:57.904 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.905 13:28:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:57.905 13:28:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82694 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.905 13:28:47 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82694 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82694 ']' 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.905 13:28:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:58.164 [2024-11-17 13:28:47.130915] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:58.164 [2024-11-17 13:28:47.130994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.164 [2024-11-17 13:28:47.284382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.164 [2024-11-17 13:28:47.343267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:58.164 [2024-11-17 13:28:47.343332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.164 [2024-11-17 13:28:47.343346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.164 [2024-11-17 13:28:47.343356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.164 [2024-11-17 13:28:47.343366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.164 [2024-11-17 13:28:47.343843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.423 [2024-11-17 13:28:47.408510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:18:58.423 13:28:47 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:58.423 13:28:47 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.423 13:28:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:58.423 13:28:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:58.423 [2024-11-17 13:28:47.532844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.423 13:28:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.423 13:28:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:58.423 ************************************ 00:18:58.423 START TEST fio_dif_1_default 00:18:58.423 ************************************ 00:18:58.423 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:18:58.423 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:58.423 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:58.423 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:58.423 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:58.423 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 bdev_null0 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:58.424 
13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 [2024-11-17 13:28:47.581020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.424 { 00:18:58.424 "params": { 00:18:58.424 "name": "Nvme$subsystem", 00:18:58.424 "trtype": "$TEST_TRANSPORT", 00:18:58.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.424 "adrfam": "ipv4", 00:18:58.424 "trsvcid": "$NVMF_PORT", 00:18:58.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.424 "hdgst": ${hdgst:-false}, 00:18:58.424 "ddgst": ${ddgst:-false} 00:18:58.424 }, 00:18:58.424 "method": "bdev_nvme_attach_controller" 00:18:58.424 } 00:18:58.424 EOF 00:18:58.424 )") 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:58.424 "params": { 00:18:58.424 "name": "Nvme0", 00:18:58.424 "trtype": "tcp", 00:18:58.424 "traddr": "10.0.0.3", 00:18:58.424 "adrfam": "ipv4", 00:18:58.424 "trsvcid": "4420", 00:18:58.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:58.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:58.424 "hdgst": false, 00:18:58.424 "ddgst": false 00:18:58.424 }, 00:18:58.424 "method": "bdev_nvme_attach_controller" 00:18:58.424 }' 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:58.424 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.683 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.683 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.683 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:58.683 13:28:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:58.683 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:58.683 fio-3.35 00:18:58.683 Starting 1 thread 00:19:10.989 00:19:10.989 filename0: (groupid=0, jobs=1): err= 0: pid=82753: Sun Nov 17 13:28:58 2024 00:19:10.989 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(411MiB/10001msec) 00:19:10.989 slat (nsec): min=5803, max=63980, avg=6969.71, stdev=2133.09 00:19:10.989 clat (usec): min=325, max=2805, avg=359.60, stdev=31.67 00:19:10.989 lat (usec): min=331, max=2815, avg=366.57, stdev=32.16 00:19:10.989 clat percentiles (usec): 00:19:10.989 | 1.00th=[ 330], 5.00th=[ 
334], 10.00th=[ 338], 20.00th=[ 343], 00:19:10.989 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:19:10.989 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 408], 00:19:10.989 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 553], 00:19:10.989 | 99.99th=[ 1532] 00:19:10.989 bw ( KiB/s): min=40320, max=42912, per=100.00%, avg=42076.95, stdev=604.92, samples=19 00:19:10.989 iops : min=10080, max=10728, avg=10519.21, stdev=151.25, samples=19 00:19:10.989 lat (usec) : 500=99.68%, 750=0.30% 00:19:10.989 lat (msec) : 2=0.01%, 4=0.01% 00:19:10.989 cpu : usr=82.84%, sys=15.18%, ctx=32, majf=0, minf=9 00:19:10.989 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.989 issued rwts: total=105180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.989 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:10.989 00:19:10.989 Run status group 0 (all jobs): 00:19:10.989 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=411MiB (431MB), run=10001-10001msec 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.989 00:19:10.989 real 0m11.070s 00:19:10.989 user 0m8.984s 00:19:10.989 sys 0m1.797s 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:10.989 ************************************ 00:19:10.989 END TEST fio_dif_1_default 00:19:10.989 ************************************ 00:19:10.989 13:28:58 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:10.989 13:28:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.989 13:28:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.989 13:28:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:10.989 ************************************ 00:19:10.989 START TEST fio_dif_1_multi_subsystems 00:19:10.989 ************************************ 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 
-- # fio_dif_1_multi_subsystems 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:10.989 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 bdev_null0 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 [2024-11-17 13:28:58.706272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 bdev_null1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.990 { 00:19:10.990 "params": { 00:19:10.990 "name": "Nvme$subsystem", 00:19:10.990 "trtype": "$TEST_TRANSPORT", 00:19:10.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.990 "adrfam": "ipv4", 00:19:10.990 "trsvcid": "$NVMF_PORT", 00:19:10.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.990 "hdgst": ${hdgst:-false}, 00:19:10.990 "ddgst": ${ddgst:-false} 00:19:10.990 }, 00:19:10.990 "method": "bdev_nvme_attach_controller" 00:19:10.990 } 00:19:10.990 EOF 00:19:10.990 )") 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:10.990 13:28:58 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:10.990 { 00:19:10.990 "params": { 00:19:10.990 "name": "Nvme$subsystem", 00:19:10.990 "trtype": "$TEST_TRANSPORT", 00:19:10.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.990 "adrfam": "ipv4", 00:19:10.990 "trsvcid": "$NVMF_PORT", 00:19:10.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.990 "hdgst": ${hdgst:-false}, 00:19:10.990 "ddgst": ${ddgst:-false} 00:19:10.990 }, 00:19:10.990 "method": "bdev_nvme_attach_controller" 00:19:10.990 } 00:19:10.990 EOF 00:19:10.990 )") 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
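For the two-subsystem case just set up, each fio filename is backed by its own null bdev and NVMe-oF subsystem. Collapsed into plain RPC calls, the provisioning traced above looks roughly like the sketch below (arguments are the ones shown in the rpc_cmd lines; scripts/rpc.py standing in for rpc_cmd is an assumption):

    for i in 0 1; do
        # 64 MB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 1
        scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        # one subsystem per bdev, open to any host
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        # expose it on the target-namespace address used throughout this run
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done

gen_nvmf_target_json then emits one bdev_nvme_attach_controller parameter block per cnode (printed just below), and that JSON is what the fio bdev plugin reads from /dev/fd/62 to attach the two namespaces as local bdevs.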
00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:10.990 "params": { 00:19:10.990 "name": "Nvme0", 00:19:10.990 "trtype": "tcp", 00:19:10.990 "traddr": "10.0.0.3", 00:19:10.990 "adrfam": "ipv4", 00:19:10.990 "trsvcid": "4420", 00:19:10.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:10.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:10.990 "hdgst": false, 00:19:10.990 "ddgst": false 00:19:10.990 }, 00:19:10.990 "method": "bdev_nvme_attach_controller" 00:19:10.990 },{ 00:19:10.990 "params": { 00:19:10.990 "name": "Nvme1", 00:19:10.990 "trtype": "tcp", 00:19:10.990 "traddr": "10.0.0.3", 00:19:10.990 "adrfam": "ipv4", 00:19:10.990 "trsvcid": "4420", 00:19:10.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.990 "hdgst": false, 00:19:10.990 "ddgst": false 00:19:10.990 }, 00:19:10.990 "method": "bdev_nvme_attach_controller" 00:19:10.990 }' 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.990 13:28:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:10.990 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:10.990 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:10.990 fio-3.35 00:19:10.990 Starting 2 threads 00:19:20.965 00:19:20.966 filename0: (groupid=0, jobs=1): err= 0: pid=82916: Sun Nov 17 13:29:09 2024 00:19:20.966 read: IOPS=5753, BW=22.5MiB/s (23.6MB/s)(225MiB/10001msec) 00:19:20.966 slat (usec): min=5, max=280, avg=12.15, stdev= 5.55 00:19:20.966 clat (usec): min=337, max=4246, avg=661.82, stdev=66.93 00:19:20.966 lat (usec): min=343, max=4275, avg=673.97, stdev=67.50 00:19:20.966 clat percentiles (usec): 00:19:20.966 | 1.00th=[ 594], 5.00th=[ 611], 10.00th=[ 619], 20.00th=[ 627], 00:19:20.966 | 30.00th=[ 635], 40.00th=[ 644], 50.00th=[ 652], 60.00th=[ 660], 00:19:20.966 | 70.00th=[ 668], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:19:20.966 | 99.00th=[ 979], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1336], 00:19:20.966 | 99.99th=[ 1975] 00:19:20.966 bw ( KiB/s): min=19206, max=23520, per=50.02%, avg=23025.16, stdev=939.51, samples=19 00:19:20.966 iops : min= 4801, max= 5880, 
avg=5756.26, stdev=234.99, samples=19 00:19:20.966 lat (usec) : 500=0.15%, 750=95.92%, 1000=3.21% 00:19:20.966 lat (msec) : 2=0.71%, 10=0.01% 00:19:20.966 cpu : usr=88.92%, sys=9.39%, ctx=113, majf=0, minf=0 00:19:20.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.966 issued rwts: total=57536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:20.966 filename1: (groupid=0, jobs=1): err= 0: pid=82917: Sun Nov 17 13:29:09 2024 00:19:20.966 read: IOPS=5753, BW=22.5MiB/s (23.6MB/s)(225MiB/10001msec) 00:19:20.966 slat (nsec): min=5848, max=66503, avg=12053.44, stdev=3939.58 00:19:20.966 clat (usec): min=342, max=5527, avg=662.35, stdev=76.76 00:19:20.966 lat (usec): min=350, max=5561, avg=674.41, stdev=77.41 00:19:20.966 clat percentiles (usec): 00:19:20.966 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 627], 00:19:20.966 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[ 660], 60.00th=[ 668], 00:19:20.966 | 70.00th=[ 676], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 742], 00:19:20.966 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1745], 00:19:20.966 | 99.99th=[ 2442] 00:19:20.966 bw ( KiB/s): min=19200, max=23584, per=50.04%, avg=23031.58, stdev=942.71, samples=19 00:19:20.966 iops : min= 4800, max= 5896, avg=5757.89, stdev=235.68, samples=19 00:19:20.966 lat (usec) : 500=0.08%, 750=95.86%, 1000=3.27% 00:19:20.966 lat (msec) : 2=0.76%, 4=0.01%, 10=0.01% 00:19:20.966 cpu : usr=88.90%, sys=9.53%, ctx=29, majf=0, minf=0 00:19:20.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.966 issued rwts: total=57544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:20.966 00:19:20.966 Run status group 0 (all jobs): 00:19:20.966 READ: bw=44.9MiB/s (47.1MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=450MiB (471MB), run=10001-10001msec 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 ************************************ 00:19:20.966 END TEST fio_dif_1_multi_subsystems 00:19:20.966 ************************************ 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 00:19:20.966 real 0m11.149s 00:19:20.966 user 0m18.543s 00:19:20.966 sys 0m2.178s 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 13:29:09 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:20.966 13:29:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:20.966 13:29:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 ************************************ 00:19:20.966 START TEST fio_dif_rand_params 00:19:20.966 ************************************ 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 bdev_null0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:20.966 [2024-11-17 13:29:09.912217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:20.966 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:20.966 { 00:19:20.966 "params": { 00:19:20.966 "name": "Nvme$subsystem", 00:19:20.966 "trtype": "$TEST_TRANSPORT", 00:19:20.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.966 "adrfam": "ipv4", 00:19:20.966 "trsvcid": "$NVMF_PORT", 00:19:20.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:20.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.966 "hdgst": ${hdgst:-false}, 00:19:20.966 "ddgst": ${ddgst:-false} 00:19:20.966 }, 00:19:20.966 "method": "bdev_nvme_attach_controller" 00:19:20.967 } 00:19:20.967 EOF 00:19:20.967 )") 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:20.967 "params": { 00:19:20.967 "name": "Nvme0", 00:19:20.967 "trtype": "tcp", 00:19:20.967 "traddr": "10.0.0.3", 00:19:20.967 "adrfam": "ipv4", 00:19:20.967 "trsvcid": "4420", 00:19:20.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:20.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:20.967 "hdgst": false, 00:19:20.967 "ddgst": false 00:19:20.967 }, 00:19:20.967 "method": "bdev_nvme_attach_controller" 00:19:20.967 }' 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:20.967 13:29:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:20.967 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:20.967 ... 
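The three per-job summaries that follow are easier to sanity-check with two rules of thumb: fio prints bandwidth in binary units with the decimal equivalent in parentheses, and for a fixed block size IOPS is just bandwidth divided by block size. A quick check against one job line from this 128 KiB run ("read: IOPS=301, BW=37.6MiB/s (39.5MB/s)"), using plain arithmetic only:

    awk 'BEGIN {
        bw_mib = 37.6; bs_kib = 128
        printf "IOPS ~= %.0f\n", bw_mib * 1024 / bs_kib   # 37.6*1024/128 = 300.8 -> 301
        printf "MB/s ~= %.2f\n", bw_mib * 1048576 / 1e6   # ~= 39.43 MB/s
    }'

The small gap to fio's printed 39.5 MB/s is rounding: fio converts the unrounded byte rate (301 IOPS x 128 KiB ~= 39.45 MB/s) rather than the already-rounded 37.6 MiB/s figure.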
00:19:20.967 fio-3.35 00:19:20.967 Starting 3 threads 00:19:27.535 00:19:27.535 filename0: (groupid=0, jobs=1): err= 0: pid=83078: Sun Nov 17 13:29:15 2024 00:19:27.535 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(188MiB/5002msec) 00:19:27.535 slat (usec): min=6, max=173, avg=16.74, stdev= 8.46 00:19:27.535 clat (usec): min=9548, max=11295, avg=9922.78, stdev=193.67 00:19:27.535 lat (usec): min=9556, max=11314, avg=9939.52, stdev=194.10 00:19:27.535 clat percentiles (usec): 00:19:27.535 | 1.00th=[ 9634], 5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9765], 00:19:27.535 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[ 9896], 00:19:27.535 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10028], 95.00th=[10290], 00:19:27.535 | 99.00th=[10814], 99.50th=[10945], 99.90th=[11338], 99.95th=[11338], 00:19:27.535 | 99.99th=[11338] 00:19:27.535 bw ( KiB/s): min=37632, max=39168, per=33.37%, avg=38570.67, stdev=512.00, samples=9 00:19:27.535 iops : min= 294, max= 306, avg=301.33, stdev= 4.00, samples=9 00:19:27.535 lat (msec) : 10=79.55%, 20=20.45% 00:19:27.535 cpu : usr=94.52%, sys=4.66%, ctx=54, majf=0, minf=0 00:19:27.535 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.535 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.535 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:27.535 filename0: (groupid=0, jobs=1): err= 0: pid=83079: Sun Nov 17 13:29:15 2024 00:19:27.535 read: IOPS=301, BW=37.6MiB/s (39.5MB/s)(188MiB/5003msec) 00:19:27.535 slat (nsec): min=6888, max=50199, avg=16682.94, stdev=6703.36 00:19:27.535 clat (usec): min=9635, max=11290, avg=9922.19, stdev=192.13 00:19:27.535 lat (usec): min=9646, max=11311, avg=9938.88, stdev=192.59 00:19:27.535 clat percentiles (usec): 00:19:27.535 | 1.00th=[ 9634], 5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9765], 00:19:27.535 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[ 9896], 00:19:27.535 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10028], 95.00th=[10159], 00:19:27.535 | 99.00th=[10814], 99.50th=[10945], 99.90th=[11338], 99.95th=[11338], 00:19:27.535 | 99.99th=[11338] 00:19:27.535 bw ( KiB/s): min=37632, max=39168, per=33.37%, avg=38570.67, stdev=512.00, samples=9 00:19:27.535 iops : min= 294, max= 306, avg=301.33, stdev= 4.00, samples=9 00:19:27.535 lat (msec) : 10=79.42%, 20=20.58% 00:19:27.535 cpu : usr=94.56%, sys=4.76%, ctx=8, majf=0, minf=0 00:19:27.535 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.535 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.535 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:27.535 filename0: (groupid=0, jobs=1): err= 0: pid=83080: Sun Nov 17 13:29:15 2024 00:19:27.535 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(189MiB/5006msec) 00:19:27.535 slat (nsec): min=6108, max=63395, avg=12446.60, stdev=7164.61 00:19:27.535 clat (usec): min=6123, max=11572, avg=9919.93, stdev=249.97 00:19:27.535 lat (usec): min=6131, max=11592, avg=9932.37, stdev=250.36 00:19:27.535 clat percentiles (usec): 00:19:27.535 | 1.00th=[ 9634], 5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9765], 00:19:27.535 | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[ 9896], 
60.00th=[ 9896], 00:19:27.535 | 70.00th=[10028], 80.00th=[10028], 90.00th=[10159], 95.00th=[10159], 00:19:27.535 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11600], 99.95th=[11600], 00:19:27.535 | 99.99th=[11600] 00:19:27.535 bw ( KiB/s): min=37632, max=39168, per=33.35%, avg=38553.60, stdev=485.73, samples=10 00:19:27.535 iops : min= 294, max= 306, avg=301.20, stdev= 3.79, samples=10 00:19:27.535 lat (msec) : 10=76.94%, 20=23.06% 00:19:27.535 cpu : usr=94.91%, sys=4.56%, ctx=10, majf=0, minf=0 00:19:27.535 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.535 issued rwts: total=1509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.535 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:27.535 00:19:27.535 Run status group 0 (all jobs): 00:19:27.535 READ: bw=113MiB/s (118MB/s), 37.6MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=565MiB (593MB), run=5002-5006msec 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:27.535 13:29:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.535 bdev_null0 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.535 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 [2024-11-17 13:29:16.086993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 bdev_null1 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 bdev_null2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:27.536 { 00:19:27.536 "params": { 00:19:27.536 "name": "Nvme$subsystem", 00:19:27.536 "trtype": "$TEST_TRANSPORT", 00:19:27.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.536 "adrfam": "ipv4", 00:19:27.536 "trsvcid": "$NVMF_PORT", 00:19:27.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:27.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.536 "hdgst": ${hdgst:-false}, 00:19:27.536 "ddgst": ${ddgst:-false} 00:19:27.536 }, 00:19:27.536 "method": "bdev_nvme_attach_controller" 00:19:27.536 } 00:19:27.536 EOF 00:19:27.536 )") 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:27.536 { 00:19:27.536 "params": { 00:19:27.536 "name": "Nvme$subsystem", 00:19:27.536 "trtype": "$TEST_TRANSPORT", 00:19:27.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.536 "adrfam": "ipv4", 00:19:27.536 "trsvcid": "$NVMF_PORT", 00:19:27.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.536 "hdgst": ${hdgst:-false}, 00:19:27.536 "ddgst": ${ddgst:-false} 00:19:27.536 }, 00:19:27.536 "method": "bdev_nvme_attach_controller" 00:19:27.536 } 00:19:27.536 EOF 00:19:27.536 )") 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:27.536 13:29:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:27.536 { 00:19:27.536 "params": { 00:19:27.536 "name": "Nvme$subsystem", 00:19:27.536 "trtype": "$TEST_TRANSPORT", 00:19:27.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.536 "adrfam": "ipv4", 00:19:27.536 "trsvcid": "$NVMF_PORT", 00:19:27.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.536 "hdgst": ${hdgst:-false}, 00:19:27.536 "ddgst": ${ddgst:-false} 00:19:27.536 }, 00:19:27.536 "method": "bdev_nvme_attach_controller" 00:19:27.536 } 00:19:27.536 EOF 00:19:27.536 )") 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:27.536 13:29:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:27.536 "params": { 00:19:27.536 "name": "Nvme0", 00:19:27.536 "trtype": "tcp", 00:19:27.536 "traddr": "10.0.0.3", 00:19:27.536 "adrfam": "ipv4", 00:19:27.536 "trsvcid": "4420", 00:19:27.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:27.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:27.536 "hdgst": false, 00:19:27.536 "ddgst": false 00:19:27.536 }, 00:19:27.536 "method": "bdev_nvme_attach_controller" 00:19:27.536 },{ 00:19:27.536 "params": { 00:19:27.536 "name": "Nvme1", 00:19:27.536 "trtype": "tcp", 00:19:27.536 "traddr": "10.0.0.3", 00:19:27.536 "adrfam": "ipv4", 00:19:27.537 "trsvcid": "4420", 00:19:27.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.537 "hdgst": false, 00:19:27.537 "ddgst": false 00:19:27.537 }, 00:19:27.537 "method": "bdev_nvme_attach_controller" 00:19:27.537 },{ 00:19:27.537 "params": { 00:19:27.537 "name": "Nvme2", 00:19:27.537 "trtype": "tcp", 00:19:27.537 "traddr": "10.0.0.3", 00:19:27.537 "adrfam": "ipv4", 00:19:27.537 "trsvcid": "4420", 00:19:27.537 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.537 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:27.537 "hdgst": false, 00:19:27.537 "ddgst": false 00:19:27.537 }, 00:19:27.537 "method": "bdev_nvme_attach_controller" 00:19:27.537 }' 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.537 13:29:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.537 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:27.537 ... 00:19:27.537 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:27.537 ... 00:19:27.537 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:27.537 ... 00:19:27.537 fio-3.35 00:19:27.537 Starting 24 threads 00:19:39.743 00:19:39.743 filename0: (groupid=0, jobs=1): err= 0: pid=83176: Sun Nov 17 13:29:27 2024 00:19:39.743 read: IOPS=231, BW=926KiB/s (948kB/s)(9312KiB/10060msec) 00:19:39.743 slat (usec): min=6, max=8038, avg=27.71, stdev=250.36 00:19:39.743 clat (msec): min=7, max=138, avg=68.85, stdev=21.01 00:19:39.743 lat (msec): min=7, max=138, avg=68.88, stdev=21.01 00:19:39.743 clat percentiles (msec): 00:19:39.743 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 45], 20.00th=[ 53], 00:19:39.743 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:19:39.743 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 103], 00:19:39.743 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 131], 99.95th=[ 138], 00:19:39.743 | 99.99th=[ 138] 00:19:39.743 bw ( KiB/s): min= 656, max= 1424, per=4.15%, avg=924.80, stdev=189.36, samples=20 00:19:39.743 iops : min= 164, max= 356, avg=231.20, stdev=47.34, samples=20 00:19:39.743 lat (msec) : 10=0.09%, 20=2.75%, 50=15.59%, 100=75.47%, 250=6.10% 00:19:39.743 cpu : usr=37.90%, sys=1.55%, ctx=1258, majf=0, minf=9 00:19:39.743 IO depths : 1=0.1%, 2=1.5%, 4=6.3%, 8=76.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 complete : 0=0.0%, 4=89.4%, 8=9.3%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.743 filename0: (groupid=0, jobs=1): err= 0: pid=83177: Sun Nov 17 13:29:27 2024 00:19:39.743 read: IOPS=232, BW=929KiB/s (951kB/s)(9304KiB/10020msec) 00:19:39.743 slat (usec): min=4, max=6048, avg=22.19, stdev=150.62 00:19:39.743 clat (msec): min=22, max=136, avg=68.78, stdev=19.85 00:19:39.743 lat (msec): min=22, max=136, avg=68.81, stdev=19.85 00:19:39.743 clat percentiles (msec): 00:19:39.743 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 52], 00:19:39.743 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 00:19:39.743 | 70.00th=[ 79], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 102], 00:19:39.743 | 99.00th=[ 109], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 138], 00:19:39.743 | 99.99th=[ 138] 00:19:39.743 bw ( KiB/s): min= 640, max= 1216, per=4.16%, avg=926.35, stdev=156.49, samples=20 00:19:39.743 iops : min= 160, max= 304, avg=231.55, stdev=39.13, samples=20 00:19:39.743 lat (msec) : 50=19.30%, 100=75.32%, 250=5.37% 00:19:39.743 cpu : usr=36.21%, sys=1.42%, ctx=1114, majf=0, minf=9 00:19:39.743 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=78.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.743 
latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.743 filename0: (groupid=0, jobs=1): err= 0: pid=83178: Sun Nov 17 13:29:27 2024 00:19:39.743 read: IOPS=242, BW=968KiB/s (992kB/s)(9740KiB/10058msec) 00:19:39.743 slat (usec): min=3, max=12013, avg=30.31, stdev=320.01 00:19:39.743 clat (usec): min=1527, max=127519, avg=65812.19, stdev=22700.41 00:19:39.743 lat (usec): min=1535, max=127547, avg=65842.49, stdev=22702.62 00:19:39.743 clat percentiles (msec): 00:19:39.743 | 1.00th=[ 3], 5.00th=[ 22], 10.00th=[ 37], 20.00th=[ 51], 00:19:39.743 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:19:39.743 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 95], 95.00th=[ 99], 00:19:39.743 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 128], 99.95th=[ 128], 00:19:39.743 | 99.99th=[ 128] 00:19:39.743 bw ( KiB/s): min= 736, max= 1944, per=4.36%, avg=969.20, stdev=266.03, samples=20 00:19:39.743 iops : min= 184, max= 486, avg=242.30, stdev=66.51, samples=20 00:19:39.743 lat (msec) : 2=0.08%, 4=2.55%, 10=0.66%, 20=1.31%, 50=15.15% 00:19:39.743 lat (msec) : 100=76.22%, 250=4.02% 00:19:39.743 cpu : usr=33.96%, sys=1.72%, ctx=1188, majf=0, minf=9 00:19:39.743 IO depths : 1=0.3%, 2=0.9%, 4=2.4%, 8=80.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 issued rwts: total=2435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.743 filename0: (groupid=0, jobs=1): err= 0: pid=83179: Sun Nov 17 13:29:27 2024 00:19:39.743 read: IOPS=248, BW=994KiB/s (1018kB/s)(9.79MiB/10085msec) 00:19:39.743 slat (usec): min=3, max=8028, avg=23.44, stdev=212.09 00:19:39.743 clat (usec): min=1272, max=143468, avg=64182.61, stdev=25498.76 00:19:39.743 lat (usec): min=1279, max=143502, avg=64206.05, stdev=25499.05 00:19:39.743 clat percentiles (usec): 00:19:39.743 | 1.00th=[ 1369], 5.00th=[ 3687], 10.00th=[ 31589], 20.00th=[ 46924], 00:19:39.743 | 30.00th=[ 55837], 40.00th=[ 60556], 50.00th=[ 65274], 60.00th=[ 70779], 00:19:39.743 | 70.00th=[ 79168], 80.00th=[ 86508], 90.00th=[ 94897], 95.00th=[ 98042], 00:19:39.743 | 99.00th=[107480], 99.50th=[109577], 99.90th=[143655], 99.95th=[143655], 00:19:39.743 | 99.99th=[143655] 00:19:39.743 bw ( KiB/s): min= 712, max= 2539, per=4.47%, avg=995.85, stdev=390.61, samples=20 00:19:39.743 iops : min= 178, max= 634, avg=248.90, stdev=97.50, samples=20 00:19:39.743 lat (msec) : 2=1.28%, 4=3.75%, 10=0.88%, 20=2.31%, 50=16.20% 00:19:39.743 lat (msec) : 100=71.43%, 250=4.15% 00:19:39.743 cpu : usr=37.79%, sys=1.62%, ctx=1128, majf=0, minf=0 00:19:39.743 IO depths : 1=0.3%, 2=1.6%, 4=5.3%, 8=77.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 complete : 0=0.0%, 4=89.1%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 issued rwts: total=2506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.743 filename0: (groupid=0, jobs=1): err= 0: pid=83180: Sun Nov 17 13:29:27 2024 00:19:39.743 read: IOPS=239, BW=960KiB/s (983kB/s)(9604KiB/10007msec) 00:19:39.743 slat (usec): min=4, max=4055, avg=27.59, stdev=161.78 00:19:39.743 clat (msec): min=13, max=121, avg=66.55, stdev=20.09 00:19:39.743 lat (msec): min=13, max=121, avg=66.58, stdev=20.09 00:19:39.743 clat percentiles (msec): 00:19:39.743 | 1.00th=[ 28], 
5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:19:39.743 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:19:39.743 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 101], 00:19:39.743 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 117], 99.95th=[ 121], 00:19:39.743 | 99.99th=[ 123] 00:19:39.743 bw ( KiB/s): min= 736, max= 1256, per=4.27%, avg=950.32, stdev=173.45, samples=19 00:19:39.743 iops : min= 184, max= 314, avg=237.58, stdev=43.36, samples=19 00:19:39.743 lat (msec) : 20=0.29%, 50=24.61%, 100=70.05%, 250=5.04% 00:19:39.743 cpu : usr=38.69%, sys=1.74%, ctx=1313, majf=0, minf=9 00:19:39.743 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.743 filename0: (groupid=0, jobs=1): err= 0: pid=83181: Sun Nov 17 13:29:27 2024 00:19:39.743 read: IOPS=233, BW=933KiB/s (955kB/s)(9332KiB/10005msec) 00:19:39.743 slat (usec): min=6, max=8035, avg=35.99, stdev=341.89 00:19:39.743 clat (msec): min=7, max=141, avg=68.45, stdev=20.02 00:19:39.743 lat (msec): min=7, max=141, avg=68.49, stdev=20.02 00:19:39.743 clat percentiles (msec): 00:19:39.743 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 51], 00:19:39.743 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:19:39.743 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 104], 00:19:39.743 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 134], 99.95th=[ 142], 00:19:39.743 | 99.99th=[ 142] 00:19:39.743 bw ( KiB/s): min= 640, max= 1200, per=4.14%, avg=920.84, stdev=169.70, samples=19 00:19:39.743 iops : min= 160, max= 300, avg=230.21, stdev=42.43, samples=19 00:19:39.743 lat (msec) : 10=0.26%, 20=0.30%, 50=18.35%, 100=75.95%, 250=5.14% 00:19:39.743 cpu : usr=37.47%, sys=1.58%, ctx=1080, majf=0, minf=9 00:19:39.743 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 complete : 0=0.0%, 4=88.8%, 8=9.9%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.743 issued rwts: total=2333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename0: (groupid=0, jobs=1): err= 0: pid=83182: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=218, BW=874KiB/s (895kB/s)(8756KiB/10014msec) 00:19:39.744 slat (usec): min=4, max=8043, avg=34.41, stdev=319.19 00:19:39.744 clat (msec): min=23, max=143, avg=72.96, stdev=20.95 00:19:39.744 lat (msec): min=23, max=143, avg=72.99, stdev=20.96 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 54], 00:19:39.744 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 83], 00:19:39.744 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 97], 95.00th=[ 107], 00:19:39.744 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 142], 99.95th=[ 144], 00:19:39.744 | 99.99th=[ 144] 00:19:39.744 bw ( KiB/s): min= 640, max= 1248, per=3.89%, avg=866.53, stdev=193.40, samples=19 00:19:39.744 iops : min= 160, max= 312, avg=216.63, stdev=48.35, samples=19 00:19:39.744 lat (msec) : 50=17.36%, 100=74.01%, 250=8.63% 00:19:39.744 cpu : usr=33.05%, sys=1.44%, ctx=936, majf=0, minf=9 00:19:39.744 IO depths : 1=0.1%, 2=3.2%, 4=12.7%, 8=69.5%, 16=14.5%, 
32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=90.9%, 8=6.3%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename0: (groupid=0, jobs=1): err= 0: pid=83183: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=241, BW=967KiB/s (990kB/s)(9696KiB/10030msec) 00:19:39.744 slat (usec): min=4, max=11033, avg=25.77, stdev=245.97 00:19:39.744 clat (msec): min=15, max=120, avg=66.07, stdev=19.53 00:19:39.744 lat (msec): min=15, max=120, avg=66.10, stdev=19.53 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:19:39.744 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 69], 00:19:39.744 | 70.00th=[ 74], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 101], 00:19:39.744 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:19:39.744 | 99.99th=[ 122] 00:19:39.744 bw ( KiB/s): min= 736, max= 1168, per=4.33%, avg=964.45, stdev=147.64, samples=20 00:19:39.744 iops : min= 184, max= 292, avg=241.10, stdev=36.92, samples=20 00:19:39.744 lat (msec) : 20=0.17%, 50=23.89%, 100=71.04%, 250=4.91% 00:19:39.744 cpu : usr=43.22%, sys=1.55%, ctx=1304, majf=0, minf=9 00:19:39.744 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename1: (groupid=0, jobs=1): err= 0: pid=83184: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=236, BW=946KiB/s (968kB/s)(9484KiB/10028msec) 00:19:39.744 slat (usec): min=4, max=10031, avg=32.52, stdev=298.86 00:19:39.744 clat (msec): min=15, max=131, avg=67.47, stdev=19.63 00:19:39.744 lat (msec): min=15, max=131, avg=67.50, stdev=19.63 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 25], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 49], 00:19:39.744 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71], 00:19:39.744 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 99], 00:19:39.744 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 130], 99.95th=[ 130], 00:19:39.744 | 99.99th=[ 132] 00:19:39.744 bw ( KiB/s): min= 744, max= 1192, per=4.24%, avg=944.20, stdev=145.57, samples=20 00:19:39.744 iops : min= 186, max= 298, avg=236.05, stdev=36.39, samples=20 00:19:39.744 lat (msec) : 20=0.59%, 50=21.43%, 100=74.15%, 250=3.84% 00:19:39.744 cpu : usr=36.90%, sys=1.27%, ctx=1111, majf=0, minf=10 00:19:39.744 IO depths : 1=0.1%, 2=0.8%, 4=3.5%, 8=79.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename1: (groupid=0, jobs=1): err= 0: pid=83185: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=239, BW=957KiB/s (980kB/s)(9588KiB/10020msec) 00:19:39.744 slat (usec): min=4, max=8044, avg=41.06, stdev=383.65 00:19:39.744 clat (msec): min=17, max=120, avg=66.68, stdev=19.03 00:19:39.744 lat (msec): min=17, max=120, 
avg=66.72, stdev=19.04 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 49], 00:19:39.744 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:19:39.744 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 99], 00:19:39.744 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:19:39.744 | 99.99th=[ 121] 00:19:39.744 bw ( KiB/s): min= 760, max= 1152, per=4.29%, avg=954.80, stdev=143.28, samples=20 00:19:39.744 iops : min= 190, max= 288, avg=238.70, stdev=35.82, samples=20 00:19:39.744 lat (msec) : 20=0.13%, 50=22.53%, 100=73.30%, 250=4.05% 00:19:39.744 cpu : usr=35.41%, sys=1.32%, ctx=959, majf=0, minf=9 00:19:39.744 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename1: (groupid=0, jobs=1): err= 0: pid=83186: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=233, BW=934KiB/s (956kB/s)(9388KiB/10051msec) 00:19:39.744 slat (usec): min=4, max=8044, avg=46.96, stdev=455.23 00:19:39.744 clat (msec): min=17, max=138, avg=68.27, stdev=20.11 00:19:39.744 lat (msec): min=17, max=138, avg=68.32, stdev=20.11 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 00:19:39.744 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:19:39.744 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 100], 00:19:39.744 | 99.00th=[ 109], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 140], 00:19:39.744 | 99.99th=[ 140] 00:19:39.744 bw ( KiB/s): min= 752, max= 1274, per=4.18%, avg=931.75, stdev=151.07, samples=20 00:19:39.744 iops : min= 188, max= 318, avg=232.90, stdev=37.70, samples=20 00:19:39.744 lat (msec) : 20=0.21%, 50=19.56%, 100=75.67%, 250=4.56% 00:19:39.744 cpu : usr=36.64%, sys=1.54%, ctx=1041, majf=0, minf=9 00:19:39.744 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=79.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename1: (groupid=0, jobs=1): err= 0: pid=83187: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=232, BW=931KiB/s (953kB/s)(9360KiB/10058msec) 00:19:39.744 slat (usec): min=5, max=10090, avg=33.16, stdev=354.43 00:19:39.744 clat (msec): min=13, max=132, avg=68.47, stdev=20.95 00:19:39.744 lat (msec): min=13, max=132, avg=68.50, stdev=20.94 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 56], 00:19:39.744 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:19:39.744 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 103], 00:19:39.744 | 99.00th=[ 111], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 132], 00:19:39.744 | 99.99th=[ 132] 00:19:39.744 bw ( KiB/s): min= 640, max= 1408, per=4.18%, avg=929.60, stdev=173.35, samples=20 00:19:39.744 iops : min= 160, max= 352, avg=232.40, stdev=43.34, samples=20 00:19:39.744 lat (msec) : 20=3.33%, 50=14.32%, 100=77.22%, 250=5.13% 00:19:39.744 cpu : usr=36.28%, sys=1.59%, 
ctx=1028, majf=0, minf=9 00:19:39.744 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename1: (groupid=0, jobs=1): err= 0: pid=83188: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=219, BW=877KiB/s (898kB/s)(8824KiB/10057msec) 00:19:39.744 slat (usec): min=4, max=8042, avg=34.98, stdev=340.95 00:19:39.744 clat (msec): min=9, max=143, avg=72.62, stdev=24.34 00:19:39.744 lat (msec): min=9, max=143, avg=72.66, stdev=24.34 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 45], 20.00th=[ 55], 00:19:39.744 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 80], 00:19:39.744 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 113], 00:19:39.744 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:19:39.744 | 99.99th=[ 144] 00:19:39.744 bw ( KiB/s): min= 528, max= 1536, per=3.93%, avg=876.00, stdev=245.61, samples=20 00:19:39.744 iops : min= 132, max= 384, avg=219.00, stdev=61.40, samples=20 00:19:39.744 lat (msec) : 10=0.09%, 20=3.45%, 50=13.87%, 100=72.71%, 250=9.88% 00:19:39.744 cpu : usr=37.18%, sys=1.60%, ctx=1048, majf=0, minf=9 00:19:39.744 IO depths : 1=0.1%, 2=3.5%, 4=13.9%, 8=68.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:19:39.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 complete : 0=0.0%, 4=91.5%, 8=5.5%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.744 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.744 filename1: (groupid=0, jobs=1): err= 0: pid=83189: Sun Nov 17 13:29:27 2024 00:19:39.744 read: IOPS=236, BW=948KiB/s (970kB/s)(9508KiB/10033msec) 00:19:39.744 slat (usec): min=3, max=8044, avg=37.07, stdev=355.06 00:19:39.744 clat (msec): min=10, max=120, avg=67.34, stdev=20.22 00:19:39.744 lat (msec): min=10, max=120, avg=67.38, stdev=20.23 00:19:39.744 clat percentiles (msec): 00:19:39.744 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:19:39.744 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:19:39.744 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 102], 00:19:39.744 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:19:39.744 | 99.99th=[ 122] 00:19:39.745 bw ( KiB/s): min= 712, max= 1394, per=4.25%, avg=945.70, stdev=166.23, samples=20 00:19:39.745 iops : min= 178, max= 348, avg=236.40, stdev=41.49, samples=20 00:19:39.745 lat (msec) : 20=0.42%, 50=22.00%, 100=71.98%, 250=5.60% 00:19:39.745 cpu : usr=34.52%, sys=1.65%, ctx=1323, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename1: (groupid=0, jobs=1): err= 0: pid=83190: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=213, BW=854KiB/s (875kB/s)(8576KiB/10042msec) 00:19:39.745 slat (usec): min=4, max=8034, avg=32.59, stdev=316.40 00:19:39.745 
clat (msec): min=17, max=147, avg=74.67, stdev=23.59 00:19:39.745 lat (msec): min=17, max=147, avg=74.70, stdev=23.60 00:19:39.745 clat percentiles (msec): 00:19:39.745 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 56], 00:19:39.745 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 83], 00:19:39.745 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 118], 00:19:39.745 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 148], 00:19:39.745 | 99.99th=[ 148] 00:19:39.745 bw ( KiB/s): min= 512, max= 1224, per=3.83%, avg=851.20, stdev=215.08, samples=20 00:19:39.745 iops : min= 128, max= 306, avg=212.80, stdev=53.77, samples=20 00:19:39.745 lat (msec) : 20=0.19%, 50=15.90%, 100=68.75%, 250=15.16% 00:19:39.745 cpu : usr=37.22%, sys=1.69%, ctx=1084, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=3.9%, 4=15.4%, 8=66.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=91.6%, 8=5.0%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename1: (groupid=0, jobs=1): err= 0: pid=83191: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=235, BW=943KiB/s (966kB/s)(9452KiB/10020msec) 00:19:39.745 slat (usec): min=4, max=8061, avg=47.62, stdev=465.70 00:19:39.745 clat (msec): min=21, max=120, avg=67.63, stdev=18.66 00:19:39.745 lat (msec): min=21, max=120, avg=67.67, stdev=18.66 00:19:39.745 clat percentiles (msec): 00:19:39.745 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 50], 00:19:39.745 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:19:39.745 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 97], 00:19:39.745 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 115], 99.95th=[ 120], 00:19:39.745 | 99.99th=[ 121] 00:19:39.745 bw ( KiB/s): min= 768, max= 1200, per=4.22%, avg=938.70, stdev=136.91, samples=20 00:19:39.745 iops : min= 192, max= 300, avg=234.65, stdev=34.22, samples=20 00:19:39.745 lat (msec) : 50=21.75%, 100=74.95%, 250=3.30% 00:19:39.745 cpu : usr=33.41%, sys=1.19%, ctx=930, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename2: (groupid=0, jobs=1): err= 0: pid=83192: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=230, BW=921KiB/s (943kB/s)(9212KiB/10005msec) 00:19:39.745 slat (usec): min=5, max=8043, avg=30.02, stdev=251.87 00:19:39.745 clat (msec): min=6, max=130, avg=69.33, stdev=21.46 00:19:39.745 lat (msec): min=6, max=130, avg=69.36, stdev=21.46 00:19:39.745 clat percentiles (msec): 00:19:39.745 | 1.00th=[ 22], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:19:39.745 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 74], 00:19:39.745 | 70.00th=[ 86], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 101], 00:19:39.745 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 130], 99.95th=[ 131], 00:19:39.745 | 99.99th=[ 131] 00:19:39.745 bw ( KiB/s): min= 592, max= 1304, per=4.09%, avg=909.05, stdev=225.99, samples=19 00:19:39.745 iops : min= 148, max= 326, avg=227.26, stdev=56.50, samples=19 00:19:39.745 lat (msec) : 
10=0.26%, 20=0.56%, 50=21.45%, 100=72.73%, 250=4.99% 00:19:39.745 cpu : usr=42.45%, sys=1.80%, ctx=1335, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=71.6%, 16=14.3%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename2: (groupid=0, jobs=1): err= 0: pid=83193: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=221, BW=885KiB/s (906kB/s)(8892KiB/10048msec) 00:19:39.745 slat (usec): min=3, max=8044, avg=24.33, stdev=191.00 00:19:39.745 clat (msec): min=14, max=144, avg=72.09, stdev=22.57 00:19:39.745 lat (msec): min=14, max=144, avg=72.11, stdev=22.57 00:19:39.745 clat percentiles (msec): 00:19:39.745 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 55], 00:19:39.745 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 74], 00:19:39.745 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 106], 00:19:39.745 | 99.00th=[ 132], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:19:39.745 | 99.99th=[ 146] 00:19:39.745 bw ( KiB/s): min= 544, max= 1154, per=3.98%, avg=885.30, stdev=179.48, samples=20 00:19:39.745 iops : min= 136, max= 288, avg=221.30, stdev=44.83, samples=20 00:19:39.745 lat (msec) : 20=0.09%, 50=17.59%, 100=70.54%, 250=11.79% 00:19:39.745 cpu : usr=35.58%, sys=2.02%, ctx=1430, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=2.3%, 4=8.9%, 8=73.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=90.0%, 8=8.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename2: (groupid=0, jobs=1): err= 0: pid=83194: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=226, BW=905KiB/s (927kB/s)(9080KiB/10035msec) 00:19:39.745 slat (usec): min=4, max=4035, avg=23.33, stdev=146.33 00:19:39.745 clat (msec): min=24, max=132, avg=70.57, stdev=19.82 00:19:39.745 lat (msec): min=24, max=132, avg=70.59, stdev=19.82 00:19:39.745 clat percentiles (msec): 00:19:39.745 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 54], 00:19:39.745 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:19:39.745 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 105], 00:19:39.745 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 133], 00:19:39.745 | 99.99th=[ 133] 00:19:39.745 bw ( KiB/s): min= 640, max= 1200, per=4.06%, avg=903.60, stdev=169.83, samples=20 00:19:39.745 iops : min= 160, max= 300, avg=225.90, stdev=42.46, samples=20 00:19:39.745 lat (msec) : 50=17.80%, 100=75.20%, 250=7.00% 00:19:39.745 cpu : usr=36.69%, sys=1.59%, ctx=1290, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=76.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename2: (groupid=0, jobs=1): err= 0: pid=83195: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=249, BW=997KiB/s (1021kB/s)(9972KiB/10001msec) 
00:19:39.745 slat (usec): min=4, max=8031, avg=28.91, stdev=227.32 00:19:39.745 clat (usec): min=1027, max=132173, avg=64030.97, stdev=27309.36 00:19:39.745 lat (usec): min=1033, max=132185, avg=64059.89, stdev=27315.14 00:19:39.745 clat percentiles (usec): 00:19:39.745 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 35390], 20.00th=[ 44303], 00:19:39.745 | 30.00th=[ 51119], 40.00th=[ 59507], 50.00th=[ 63701], 60.00th=[ 69731], 00:19:39.745 | 70.00th=[ 82314], 80.00th=[ 89654], 90.00th=[ 95945], 95.00th=[102237], 00:19:39.745 | 99.00th=[120062], 99.50th=[123208], 99.90th=[131597], 99.95th=[132645], 00:19:39.745 | 99.99th=[132645] 00:19:39.745 bw ( KiB/s): min= 528, max= 1224, per=4.11%, avg=914.95, stdev=208.05, samples=19 00:19:39.745 iops : min= 132, max= 306, avg=228.74, stdev=52.01, samples=19 00:19:39.745 lat (msec) : 2=5.78%, 4=0.52%, 10=1.00%, 20=0.40%, 50=21.26% 00:19:39.745 lat (msec) : 100=65.58%, 250=5.46% 00:19:39.745 cpu : usr=39.95%, sys=1.90%, ctx=1097, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=89.2%, 8=9.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.745 filename2: (groupid=0, jobs=1): err= 0: pid=83196: Sun Nov 17 13:29:27 2024 00:19:39.745 read: IOPS=218, BW=874KiB/s (895kB/s)(8788KiB/10051msec) 00:19:39.745 slat (usec): min=5, max=8041, avg=25.48, stdev=191.56 00:19:39.745 clat (msec): min=6, max=139, avg=72.92, stdev=22.15 00:19:39.745 lat (msec): min=6, max=139, avg=72.95, stdev=22.15 00:19:39.745 clat percentiles (msec): 00:19:39.745 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:19:39.745 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 81], 00:19:39.745 | 70.00th=[ 86], 80.00th=[ 94], 90.00th=[ 100], 95.00th=[ 107], 00:19:39.745 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 140], 00:19:39.745 | 99.99th=[ 140] 00:19:39.745 bw ( KiB/s): min= 529, max= 1280, per=3.92%, avg=872.75, stdev=206.14, samples=20 00:19:39.745 iops : min= 132, max= 320, avg=218.10, stdev=51.52, samples=20 00:19:39.745 lat (msec) : 10=0.18%, 20=1.91%, 50=12.02%, 100=76.42%, 250=9.47% 00:19:39.745 cpu : usr=38.08%, sys=1.43%, ctx=1031, majf=0, minf=9 00:19:39.745 IO depths : 1=0.1%, 2=3.4%, 4=13.2%, 8=68.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:19:39.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 complete : 0=0.0%, 4=91.3%, 8=5.8%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.745 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.746 filename2: (groupid=0, jobs=1): err= 0: pid=83197: Sun Nov 17 13:29:27 2024 00:19:39.746 read: IOPS=228, BW=915KiB/s (937kB/s)(9200KiB/10059msec) 00:19:39.746 slat (usec): min=6, max=8038, avg=37.95, stdev=342.92 00:19:39.746 clat (msec): min=10, max=144, avg=69.69, stdev=22.03 00:19:39.746 lat (msec): min=10, max=144, avg=69.73, stdev=22.04 00:19:39.746 clat percentiles (msec): 00:19:39.746 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 52], 00:19:39.746 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:19:39.746 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 105], 00:19:39.746 | 99.00th=[ 120], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 144], 
00:19:39.746 | 99.99th=[ 144] 00:19:39.746 bw ( KiB/s): min= 640, max= 1536, per=4.10%, avg=913.60, stdev=211.80, samples=20 00:19:39.746 iops : min= 160, max= 384, avg=228.40, stdev=52.95, samples=20 00:19:39.746 lat (msec) : 20=2.78%, 50=14.78%, 100=74.13%, 250=8.30% 00:19:39.746 cpu : usr=37.38%, sys=1.57%, ctx=1093, majf=0, minf=9 00:19:39.746 IO depths : 1=0.1%, 2=1.9%, 4=7.3%, 8=75.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:39.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.746 complete : 0=0.0%, 4=89.6%, 8=8.8%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.746 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.746 filename2: (groupid=0, jobs=1): err= 0: pid=83198: Sun Nov 17 13:29:27 2024 00:19:39.746 read: IOPS=233, BW=936KiB/s (958kB/s)(9412KiB/10057msec) 00:19:39.746 slat (usec): min=5, max=8048, avg=24.80, stdev=218.94 00:19:39.746 clat (msec): min=9, max=142, avg=68.22, stdev=21.58 00:19:39.746 lat (msec): min=9, max=142, avg=68.25, stdev=21.59 00:19:39.746 clat percentiles (msec): 00:19:39.746 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 51], 00:19:39.746 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:19:39.746 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 103], 00:19:39.746 | 99.00th=[ 110], 99.50th=[ 128], 99.90th=[ 133], 99.95th=[ 142], 00:19:39.746 | 99.99th=[ 142] 00:19:39.746 bw ( KiB/s): min= 632, max= 1536, per=4.20%, avg=934.80, stdev=208.51, samples=20 00:19:39.746 iops : min= 158, max= 384, avg=233.70, stdev=52.13, samples=20 00:19:39.746 lat (msec) : 10=0.08%, 20=3.23%, 50=16.19%, 100=73.61%, 250=6.88% 00:19:39.746 cpu : usr=48.29%, sys=2.03%, ctx=1172, majf=0, minf=9 00:19:39.746 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:39.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.746 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.746 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.746 filename2: (groupid=0, jobs=1): err= 0: pid=83199: Sun Nov 17 13:29:27 2024 00:19:39.746 read: IOPS=246, BW=985KiB/s (1008kB/s)(9852KiB/10005msec) 00:19:39.746 slat (usec): min=4, max=8034, avg=39.95, stdev=363.33 00:19:39.746 clat (msec): min=7, max=118, avg=64.79, stdev=19.71 00:19:39.746 lat (msec): min=7, max=118, avg=64.83, stdev=19.70 00:19:39.746 clat percentiles (msec): 00:19:39.746 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:19:39.746 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 68], 00:19:39.746 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 97], 00:19:39.746 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 117], 99.95th=[ 118], 00:19:39.746 | 99.99th=[ 120] 00:19:39.746 bw ( KiB/s): min= 768, max= 1312, per=4.39%, avg=976.42, stdev=165.94, samples=19 00:19:39.746 iops : min= 192, max= 328, avg=244.11, stdev=41.49, samples=19 00:19:39.746 lat (msec) : 10=0.28%, 20=0.53%, 50=25.94%, 100=69.63%, 250=3.61% 00:19:39.746 cpu : usr=38.73%, sys=1.63%, ctx=1188, majf=0, minf=9 00:19:39.746 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:39.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.746 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.746 issued rwts: total=2463,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:39.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:39.746 00:19:39.746 Run status group 0 (all jobs): 00:19:39.746 READ: bw=21.7MiB/s (22.8MB/s), 854KiB/s-997KiB/s (875kB/s-1021kB/s), io=219MiB (230MB), run=10001-10085msec 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:39.746 13:29:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 bdev_null0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 [2024-11-17 13:29:27.637123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:39.746 
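The create_subsystem helper traced above provisions each target-side endpoint in four RPCs: a null bdev with 16 bytes of metadata and DIF type 1, an NVMe-oF subsystem, a namespace mapping, and a TCP listener. A rough manual equivalent is sketched below; it assumes a running nvmf_tgt whose TCP transport has already been created, and the scripts/rpc.py path and listener address are taken from this run rather than being universal.
# Hypothetical stand-alone equivalent of "create_subsystem 0" (sketch only;
# assumes `nvmf_create_transport -t tcp` was already issued on the target).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420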
13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.746 bdev_null1 00:19:39.746 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:39.747 { 00:19:39.747 "params": { 00:19:39.747 "name": "Nvme$subsystem", 00:19:39.747 "trtype": "$TEST_TRANSPORT", 00:19:39.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.747 "adrfam": "ipv4", 00:19:39.747 "trsvcid": "$NVMF_PORT", 00:19:39.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.747 "hdgst": ${hdgst:-false}, 00:19:39.747 "ddgst": ${ddgst:-false} 00:19:39.747 }, 00:19:39.747 "method": "bdev_nvme_attach_controller" 00:19:39.747 } 00:19:39.747 EOF 00:19:39.747 )") 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.747 
13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:39.747 { 00:19:39.747 "params": { 00:19:39.747 "name": "Nvme$subsystem", 00:19:39.747 "trtype": "$TEST_TRANSPORT", 00:19:39.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.747 "adrfam": "ipv4", 00:19:39.747 "trsvcid": "$NVMF_PORT", 00:19:39.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.747 "hdgst": ${hdgst:-false}, 00:19:39.747 "ddgst": ${ddgst:-false} 00:19:39.747 }, 00:19:39.747 "method": "bdev_nvme_attach_controller" 00:19:39.747 } 00:19:39.747 EOF 00:19:39.747 )") 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:39.747 "params": { 00:19:39.747 "name": "Nvme0", 00:19:39.747 "trtype": "tcp", 00:19:39.747 "traddr": "10.0.0.3", 00:19:39.747 "adrfam": "ipv4", 00:19:39.747 "trsvcid": "4420", 00:19:39.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:39.747 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:39.747 "hdgst": false, 00:19:39.747 "ddgst": false 00:19:39.747 }, 00:19:39.747 "method": "bdev_nvme_attach_controller" 00:19:39.747 },{ 00:19:39.747 "params": { 00:19:39.747 "name": "Nvme1", 00:19:39.747 "trtype": "tcp", 00:19:39.747 "traddr": "10.0.0.3", 00:19:39.747 "adrfam": "ipv4", 00:19:39.747 "trsvcid": "4420", 00:19:39.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.747 "hdgst": false, 00:19:39.747 "ddgst": false 00:19:39.747 }, 00:19:39.747 "method": "bdev_nvme_attach_controller" 00:19:39.747 }' 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:39.747 13:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:39.747 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:39.747 ... 00:19:39.747 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:39.747 ... 
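The fio_bdev wrapper above hands fio two anonymous files, the JSON initiator config on /dev/fd/62 and the generated job file on /dev/fd/61, while LD_PRELOAD pulls in the spdk_bdev ioengine. The same run can be reproduced with ordinary files; the sketch below is a reconstruction under stated assumptions (the exact wrapper emitted by gen_nvmf_target_json and the job file emitted by gen_fio_conf are not shown verbatim in this log), with the plugin and fio paths copied from this run.
# Sketch only: a stand-alone version of the traced fio_bdev invocation.
# The JSON uses the standard SPDK "subsystems/bdev/config" layout and only the
# first of the two attached controllers; hdgst/ddgst stay false as in this pass.
cat > /tmp/initiator.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# Job file mirroring this pass (bs=8k,16k,128k, numjobs=2, iodepth=8,
# runtime=5); "Nvme0n1" is the assumed bdev name for namespace 1 of the
# attached controller.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/initiator.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif.fio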
00:19:39.747 fio-3.35 00:19:39.747 Starting 4 threads 00:19:45.015 00:19:45.015 filename0: (groupid=0, jobs=1): err= 0: pid=83341: Sun Nov 17 13:29:33 2024 00:19:45.015 read: IOPS=2169, BW=16.9MiB/s (17.8MB/s)(84.8MiB/5003msec) 00:19:45.015 slat (nsec): min=5843, max=91540, avg=16344.04, stdev=9641.99 00:19:45.015 clat (usec): min=888, max=6060, avg=3629.54, stdev=759.39 00:19:45.015 lat (usec): min=896, max=6083, avg=3645.88, stdev=758.46 00:19:45.015 clat percentiles (usec): 00:19:45.015 | 1.00th=[ 1582], 5.00th=[ 2114], 10.00th=[ 2343], 20.00th=[ 2900], 00:19:45.015 | 30.00th=[ 3425], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4047], 00:19:45.015 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4359], 00:19:45.015 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 5604], 99.95th=[ 5997], 00:19:45.015 | 99.99th=[ 6063] 00:19:45.015 bw ( KiB/s): min=14976, max=22483, per=21.82%, avg=17116.78, stdev=2495.23, samples=9 00:19:45.015 iops : min= 1872, max= 2810, avg=2139.56, stdev=311.80, samples=9 00:19:45.015 lat (usec) : 1000=0.12% 00:19:45.015 lat (msec) : 2=3.35%, 4=47.36%, 10=49.17% 00:19:45.015 cpu : usr=94.74%, sys=4.36%, ctx=14, majf=0, minf=0 00:19:45.015 IO depths : 1=0.5%, 2=15.6%, 4=55.9%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.015 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.015 issued rwts: total=10852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.015 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.015 filename0: (groupid=0, jobs=1): err= 0: pid=83342: Sun Nov 17 13:29:33 2024 00:19:45.015 read: IOPS=2533, BW=19.8MiB/s (20.8MB/s)(99.0MiB/5002msec) 00:19:45.015 slat (nsec): min=5127, max=86940, avg=18301.26, stdev=8688.99 00:19:45.015 clat (usec): min=880, max=6183, avg=3110.57, stdev=876.34 00:19:45.015 lat (usec): min=889, max=6197, avg=3128.87, stdev=875.29 00:19:45.015 clat percentiles (usec): 00:19:45.015 | 1.00th=[ 1795], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 2180], 00:19:45.015 | 30.00th=[ 2245], 40.00th=[ 2573], 50.00th=[ 3195], 60.00th=[ 3752], 00:19:45.015 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4178], 95.00th=[ 4293], 00:19:45.015 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5211], 00:19:45.015 | 99.99th=[ 5800] 00:19:45.015 bw ( KiB/s): min=19488, max=21456, per=26.30%, avg=20629.33, stdev=793.41, samples=9 00:19:45.015 iops : min= 2436, max= 2682, avg=2578.67, stdev=99.18, samples=9 00:19:45.015 lat (usec) : 1000=0.02% 00:19:45.015 lat (msec) : 2=5.28%, 4=74.87%, 10=19.83% 00:19:45.015 cpu : usr=93.84%, sys=5.24%, ctx=5, majf=0, minf=9 00:19:45.015 IO depths : 1=0.5%, 2=3.6%, 4=62.2%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.015 complete : 0=0.0%, 4=98.7%, 8=1.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.015 issued rwts: total=12672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.015 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.015 filename1: (groupid=0, jobs=1): err= 0: pid=83343: Sun Nov 17 13:29:33 2024 00:19:45.015 read: IOPS=2524, BW=19.7MiB/s (20.7MB/s)(98.6MiB/5001msec) 00:19:45.015 slat (usec): min=4, max=658, avg=18.06, stdev=11.25 00:19:45.015 clat (usec): min=562, max=5560, avg=3122.67, stdev=929.55 00:19:45.015 lat (usec): min=573, max=5581, avg=3140.73, stdev=928.39 00:19:45.015 clat percentiles (usec): 00:19:45.015 | 1.00th=[ 1680], 5.00th=[ 1958], 10.00th=[ 
2040], 20.00th=[ 2180], 00:19:45.015 | 30.00th=[ 2245], 40.00th=[ 2507], 50.00th=[ 3195], 60.00th=[ 3785], 00:19:45.015 | 70.00th=[ 3884], 80.00th=[ 4015], 90.00th=[ 4228], 95.00th=[ 4424], 00:19:45.015 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5211], 99.95th=[ 5276], 00:19:45.015 | 99.99th=[ 5538] 00:19:45.015 bw ( KiB/s): min=15503, max=21888, per=25.88%, avg=20300.33, stdev=1979.13, samples=9 00:19:45.015 iops : min= 1937, max= 2736, avg=2537.44, stdev=247.66, samples=9 00:19:45.015 lat (usec) : 750=0.03%, 1000=0.17% 00:19:45.015 lat (msec) : 2=6.95%, 4=71.27%, 10=21.58% 00:19:45.015 cpu : usr=93.32%, sys=5.40%, ctx=45, majf=0, minf=9 00:19:45.015 IO depths : 1=0.4%, 2=3.2%, 4=62.0%, 8=34.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.015 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.015 issued rwts: total=12626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.015 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.015 filename1: (groupid=0, jobs=1): err= 0: pid=83344: Sun Nov 17 13:29:33 2024 00:19:45.015 read: IOPS=2580, BW=20.2MiB/s (21.1MB/s)(101MiB/5001msec) 00:19:45.016 slat (nsec): min=4943, max=78532, avg=16535.36, stdev=8458.71 00:19:45.016 clat (usec): min=907, max=6279, avg=3057.79, stdev=866.37 00:19:45.016 lat (usec): min=915, max=6302, avg=3074.32, stdev=866.19 00:19:45.016 clat percentiles (usec): 00:19:45.016 | 1.00th=[ 1729], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 2147], 00:19:45.016 | 30.00th=[ 2245], 40.00th=[ 2442], 50.00th=[ 3064], 60.00th=[ 3589], 00:19:45.016 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4228], 00:19:45.016 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 4752], 99.95th=[ 4948], 00:19:45.016 | 99.99th=[ 5800] 00:19:45.016 bw ( KiB/s): min=19376, max=22016, per=26.52%, avg=20805.33, stdev=937.88, samples=9 00:19:45.016 iops : min= 2422, max= 2752, avg=2600.67, stdev=117.23, samples=9 00:19:45.016 lat (usec) : 1000=0.02% 00:19:45.016 lat (msec) : 2=5.79%, 4=76.49%, 10=17.71% 00:19:45.016 cpu : usr=91.90%, sys=6.90%, ctx=6, majf=0, minf=10 00:19:45.016 IO depths : 1=0.5%, 2=2.4%, 4=62.9%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.016 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.016 issued rwts: total=12903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.016 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:45.016 00:19:45.016 Run status group 0 (all jobs): 00:19:45.016 READ: bw=76.6MiB/s (80.3MB/s), 16.9MiB/s-20.2MiB/s (17.8MB/s-21.1MB/s), io=383MiB (402MB), run=5001-5003msec 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 
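destroy_subsystems walks the same resources in reverse: for each index it deletes the NVMe-oF subsystem first and only then removes the null bdev that backed its namespace. Outside the harness this boils down to two rpc.py calls per subsystem, roughly as below (stock rpc.py location assumed).
# Hypothetical manual teardown matching the RPCs traced here (sketch only).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_null_delete bdev_null1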
13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 ************************************ 00:19:45.016 END TEST fio_dif_rand_params 00:19:45.016 ************************************ 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 00:19:45.016 real 0m23.883s 00:19:45.016 user 2m6.288s 00:19:45.016 sys 0m6.623s 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 13:29:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:45.016 13:29:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:45.016 13:29:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 ************************************ 00:19:45.016 START TEST fio_dif_digest 00:19:45.016 ************************************ 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:19:45.016 13:29:33 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 bdev_null0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 [2024-11-17 13:29:33.855596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:45.016 
13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:45.016 { 00:19:45.016 "params": { 00:19:45.016 "name": "Nvme$subsystem", 00:19:45.016 "trtype": "$TEST_TRANSPORT", 00:19:45.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:45.016 "adrfam": "ipv4", 00:19:45.016 "trsvcid": "$NVMF_PORT", 00:19:45.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:45.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:45.016 "hdgst": ${hdgst:-false}, 00:19:45.016 "ddgst": ${ddgst:-false} 00:19:45.016 }, 00:19:45.016 "method": "bdev_nvme_attach_controller" 00:19:45.016 } 00:19:45.016 EOF 00:19:45.016 )") 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:19:45.016 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:45.017 "params": { 00:19:45.017 "name": "Nvme0", 00:19:45.017 "trtype": "tcp", 00:19:45.017 "traddr": "10.0.0.3", 00:19:45.017 "adrfam": "ipv4", 00:19:45.017 "trsvcid": "4420", 00:19:45.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:45.017 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:45.017 "hdgst": true, 00:19:45.017 "ddgst": true 00:19:45.017 }, 00:19:45.017 "method": "bdev_nvme_attach_controller" 00:19:45.017 }' 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:45.017 13:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:45.017 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:45.017 ... 
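The JSON printed above is the entire bdev configuration the fio plugin needs for this run: a single bdev_nvme_attach_controller entry pointing at the TCP listener with header and data digests enabled. A minimal standalone sketch of the same invocation follows; the subsystems wrapper, the /tmp file paths and the job-file contents are illustrative assumptions (the harness itself feeds both files through /dev/fd descriptors produced by gen_nvmf_target_json and gen_fio_conf).

# Sketch only -- paths, wrapper layout and job file are assumptions, not the harness's exact files.
cat > /tmp/digest_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
cat > /tmp/digest_job.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/digest_bdev.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10
[filename0]
filename=Nvme0n1
EOF
# Preload the bdev fio plugin exactly as the trace does, then let fio drive the attached namespace bdev.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio /tmp/digest_job.fio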
00:19:45.017 fio-3.35 00:19:45.017 Starting 3 threads 00:19:57.217 00:19:57.217 filename0: (groupid=0, jobs=1): err= 0: pid=83450: Sun Nov 17 13:29:44 2024 00:19:57.217 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(342MiB/10002msec) 00:19:57.217 slat (nsec): min=6076, max=89950, avg=20736.18, stdev=10494.76 00:19:57.217 clat (usec): min=7600, max=12121, avg=10922.80, stdev=262.99 00:19:57.217 lat (usec): min=7609, max=12160, avg=10943.54, stdev=263.51 00:19:57.217 clat percentiles (usec): 00:19:57.217 | 1.00th=[10683], 5.00th=[10683], 10.00th=[10814], 20.00th=[10814], 00:19:57.217 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:19:57.217 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:19:57.217 | 99.00th=[11863], 99.50th=[11863], 99.90th=[11994], 99.95th=[12125], 00:19:57.217 | 99.99th=[12125] 00:19:57.217 bw ( KiB/s): min=34560, max=35328, per=33.37%, avg=35045.05, stdev=380.62, samples=19 00:19:57.217 iops : min= 270, max= 276, avg=273.79, stdev= 2.97, samples=19 00:19:57.217 lat (msec) : 10=0.22%, 20=99.78% 00:19:57.217 cpu : usr=94.72%, sys=4.77%, ctx=17, majf=0, minf=0 00:19:57.217 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.217 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:57.217 filename0: (groupid=0, jobs=1): err= 0: pid=83451: Sun Nov 17 13:29:44 2024 00:19:57.217 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(342MiB/10005msec) 00:19:57.217 slat (nsec): min=6152, max=84200, avg=26094.43, stdev=13253.86 00:19:57.217 clat (usec): min=7680, max=12053, avg=10911.09, stdev=252.88 00:19:57.217 lat (usec): min=7687, max=12088, avg=10937.18, stdev=254.20 00:19:57.217 clat percentiles (usec): 00:19:57.217 | 1.00th=[10683], 5.00th=[10683], 10.00th=[10683], 20.00th=[10814], 00:19:57.217 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:19:57.217 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:19:57.217 | 99.00th=[11863], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:19:57.217 | 99.99th=[11994] 00:19:57.217 bw ( KiB/s): min=34560, max=35328, per=33.33%, avg=35004.63, stdev=389.57, samples=19 00:19:57.217 iops : min= 270, max= 276, avg=273.47, stdev= 3.04, samples=19 00:19:57.217 lat (msec) : 10=0.11%, 20=99.89% 00:19:57.217 cpu : usr=94.42%, sys=5.04%, ctx=24, majf=0, minf=0 00:19:57.217 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.217 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:57.217 filename0: (groupid=0, jobs=1): err= 0: pid=83452: Sun Nov 17 13:29:44 2024 00:19:57.217 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(342MiB/10005msec) 00:19:57.217 slat (nsec): min=6172, max=88635, avg=25998.49, stdev=13275.95 00:19:57.217 clat (usec): min=7690, max=12077, avg=10912.72, stdev=253.86 00:19:57.217 lat (usec): min=7702, max=12111, avg=10938.72, stdev=255.09 00:19:57.217 clat percentiles (usec): 00:19:57.217 | 1.00th=[10683], 5.00th=[10683], 10.00th=[10683], 20.00th=[10814], 00:19:57.217 | 30.00th=[10814], 
40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:19:57.217 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:19:57.217 | 99.00th=[11863], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:19:57.217 | 99.99th=[12125] 00:19:57.217 bw ( KiB/s): min=34560, max=35328, per=33.33%, avg=35004.63, stdev=389.57, samples=19 00:19:57.217 iops : min= 270, max= 276, avg=273.47, stdev= 3.04, samples=19 00:19:57.217 lat (msec) : 10=0.11%, 20=99.89% 00:19:57.217 cpu : usr=94.65%, sys=4.80%, ctx=13, majf=0, minf=0 00:19:57.217 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.217 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.217 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:57.217 00:19:57.217 Run status group 0 (all jobs): 00:19:57.217 READ: bw=103MiB/s (108MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.9MB/s), io=1026MiB (1076MB), run=10002-10005msec 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:57.217 ************************************ 00:19:57.217 END TEST fio_dif_digest 00:19:57.217 ************************************ 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.217 00:19:57.217 real 0m11.128s 00:19:57.217 user 0m29.126s 00:19:57.217 sys 0m1.793s 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.217 13:29:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:57.217 13:29:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:57.217 13:29:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:57.217 13:29:44 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.217 13:29:44 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.217 rmmod nvme_tcp 00:19:57.217 rmmod nvme_fabrics 00:19:57.217 rmmod nvme_keyring 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
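destroy_subsystems and nvmftestfini unwind the digest test in reverse order of setup: drop the NVMe-oF subsystem, delete the backing DIF-enabled null bdev, then unload the kernel initiator modules once the target side is quiet. Expressed as direct commands against the default RPC socket (a sketch, not the harness's exact code path):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # remove the subsystem before its namespace's bdev
$rpc bdev_null_delete bdev_null0                        # release bdev_null0
modprobe -v -r nvme-tcp      # also drops the now-unused nvme_fabrics / nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics  # effectively a no-op at this point, kept for parity with the trace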
00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82694 ']' 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82694 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82694 ']' 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82694 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82694 00:19:57.217 killing process with pid 82694 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82694' 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82694 00:19:57.217 13:29:45 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82694 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:57.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:57.217 Waiting for block devices as requested 00:19:57.217 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.217 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:57.217 13:29:45 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.217 13:29:46 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:57.217 13:29:46 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:57.217 13:29:46 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
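The nvmf_veth_fini sequence just traced condenses to: strip only the firewall rules the test tagged with an SPDK_NVMF comment, then dismantle the bridge/veth topology and the peers living in the target namespace. A compact restatement of those same commands:

# iptr helper: re-load the ruleset minus every rule carrying the SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the four bridge ports, take them down, then delete the bridge and the veth endpoints.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster
    ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2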
00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.218 13:29:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:57.218 13:29:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.218 13:29:46 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:19:57.218 00:19:57.218 real 1m0.171s 00:19:57.218 user 3m49.378s 00:19:57.218 sys 0m18.871s 00:19:57.218 13:29:46 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.218 ************************************ 00:19:57.218 END TEST nvmf_dif 00:19:57.218 ************************************ 00:19:57.218 13:29:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:57.218 13:29:46 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:57.218 13:29:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:57.218 13:29:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.218 13:29:46 -- common/autotest_common.sh@10 -- # set +x 00:19:57.218 ************************************ 00:19:57.218 START TEST nvmf_abort_qd_sizes 00:19:57.218 ************************************ 00:19:57.218 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:57.218 * Looking for test storage... 00:19:57.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:57.218 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.218 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.218 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.477 --rc genhtml_branch_coverage=1 00:19:57.477 --rc genhtml_function_coverage=1 00:19:57.477 --rc genhtml_legend=1 00:19:57.477 --rc geninfo_all_blocks=1 00:19:57.477 --rc geninfo_unexecuted_blocks=1 00:19:57.477 00:19:57.477 ' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.477 --rc genhtml_branch_coverage=1 00:19:57.477 --rc genhtml_function_coverage=1 00:19:57.477 --rc genhtml_legend=1 00:19:57.477 --rc geninfo_all_blocks=1 00:19:57.477 --rc geninfo_unexecuted_blocks=1 00:19:57.477 00:19:57.477 ' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.477 --rc genhtml_branch_coverage=1 00:19:57.477 --rc genhtml_function_coverage=1 00:19:57.477 --rc genhtml_legend=1 00:19:57.477 --rc geninfo_all_blocks=1 00:19:57.477 --rc geninfo_unexecuted_blocks=1 00:19:57.477 00:19:57.477 ' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.477 --rc genhtml_branch_coverage=1 00:19:57.477 --rc genhtml_function_coverage=1 00:19:57.477 --rc genhtml_legend=1 00:19:57.477 --rc geninfo_all_blocks=1 00:19:57.477 --rc geninfo_unexecuted_blocks=1 00:19:57.477 00:19:57.477 ' 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.477 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.478 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:57.478 Cannot find device "nvmf_init_br" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:57.478 Cannot find device "nvmf_init_br2" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:57.478 Cannot find device "nvmf_tgt_br" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.478 Cannot find device "nvmf_tgt_br2" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:57.478 Cannot find device "nvmf_init_br" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:57.478 Cannot find device "nvmf_init_br2" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:57.478 Cannot find device "nvmf_tgt_br" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:57.478 Cannot find device "nvmf_tgt_br2" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:57.478 Cannot find device "nvmf_br" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:57.478 Cannot find device "nvmf_init_if" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:57.478 Cannot find device "nvmf_init_if2" 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
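Each "Cannot find device ..." complaint above is expected: nvmf_veth_init starts by tearing down whatever interfaces an earlier run may have left behind, and every cleanup command has its failure swallowed, which is why each failing command is followed by a lone "# true" in the trace. The pattern, in illustrative form:

# Pre-cleanup tolerates missing devices so the script does not abort before setup begins.
ip link set nvmf_init_br nomaster || true
ip link delete nvmf_br type bridge || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true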
00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.478 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:57.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:57.737 00:19:57.737 --- 10.0.0.3 ping statistics --- 00:19:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.737 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:57.737 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:57.737 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:19:57.737 00:19:57.737 --- 10.0.0.4 ping statistics --- 00:19:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.737 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:57.737 00:19:57.737 --- 10.0.0.1 ping statistics --- 00:19:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.737 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:57.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:57.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:57.737 00:19:57.737 --- 10.0.0.2 ping statistics --- 00:19:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.737 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:57.737 13:29:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:58.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.564 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.564 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.564 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.564 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.564 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.564 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.565 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.565 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:58.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84101 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84101 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84101 ']' 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.823 13:29:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:58.824 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.824 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.824 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.824 13:29:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:58.824 [2024-11-17 13:29:47.863145] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
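nvmfappstart reduces to launching nvmf_tgt inside the target namespace with the requested core mask and then blocking until the app answers on its RPC socket. A simplified launch-and-wait equivalent, where the polling loop is an assumption standing in for the harness's waitforlisten helper:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target accepts commands.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done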
00:19:58.824 [2024-11-17 13:29:47.863238] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.824 [2024-11-17 13:29:48.018414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.082 [2024-11-17 13:29:48.087663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.082 [2024-11-17 13:29:48.088044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.082 [2024-11-17 13:29:48.088234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.082 [2024-11-17 13:29:48.088516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.082 [2024-11-17 13:29:48.088566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.082 [2024-11-17 13:29:48.090259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.082 [2024-11-17 13:29:48.090411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.082 [2024-11-17 13:29:48.090485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.082 [2024-11-17 13:29:48.090483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.082 [2024-11-17 13:29:48.174104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:19:59.082 13:29:48 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:19:59.082 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:19:59.341 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:19:59.341 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:19:59.341 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
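nvme_in_userspace, expanded above, simply enumerates PCI functions whose class/subclass/prog-if is 01/08/02 (an NVMe controller) and filters them through the allow/block lists, both empty here, so both devices pass. The core of it is the single pipeline from the trace, printing one BDF per controller:

# Yields 0000:00:10.0 and 0000:00:11.0 on this VM ("0108" class code, prog-if 02).
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{ if (cc ~ $2) print $1 }' \
    | tr -d '"'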
00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.342 13:29:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 ************************************ 00:19:59.342 START TEST spdk_target_abort 00:19:59.342 ************************************ 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 spdk_targetn1 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 [2024-11-17 13:29:48.418172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 [2024-11-17 13:29:48.455922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:59.342 13:29:48 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:59.342 13:29:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:02.629 Initializing NVMe Controllers 00:20:02.629 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:02.629 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:02.629 Initialization complete. Launching workers. 
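Condensed, the spdk_target_abort setup replayed above is the following RPC sequence, with the controller address, NQN, serial and listener taken from this run; rpc.py stands in here for the test's rpc_cmd wrapper.

  # build an NVMe-oF/TCP target backed by the local PCIe NVMe at 0000:00:10.0
  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420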
00:20:02.629 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9037, failed: 0 00:20:02.629 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1031, failed to submit 8006 00:20:02.629 success 685, unsuccessful 346, failed 0 00:20:02.629 13:29:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:02.629 13:29:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:05.916 Initializing NVMe Controllers 00:20:05.916 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:05.916 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:05.916 Initialization complete. Launching workers. 00:20:05.916 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:20:05.916 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7786 00:20:05.916 success 362, unsuccessful 852, failed 0 00:20:05.916 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:05.916 13:29:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:09.226 Initializing NVMe Controllers 00:20:09.226 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:09.226 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:09.226 Initialization complete. Launching workers. 
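Each of the three result blocks in this test comes from one invocation of the abort example; only the queue depth changes between runs (4, 24, 64), the remaining arguments stay fixed. In outline:

  for qd in 4 24 64; do
    # 50% read/write workload, 4 KiB I/O, aborts issued against the TCP-attached namespace
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done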
00:20:09.226 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30898, failed: 0 00:20:09.226 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2404, failed to submit 28494 00:20:09.226 success 312, unsuccessful 2092, failed 0 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.226 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84101 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84101 ']' 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84101 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84101 00:20:09.794 killing process with pid 84101 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84101' 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84101 00:20:09.794 13:29:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84101 00:20:10.053 ************************************ 00:20:10.053 END TEST spdk_target_abort 00:20:10.053 ************************************ 00:20:10.053 00:20:10.053 real 0m10.785s 00:20:10.053 user 0m41.663s 00:20:10.053 sys 0m2.072s 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:10.053 13:29:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:10.053 13:29:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:10.053 13:29:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.053 13:29:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:10.053 ************************************ 00:20:10.053 START TEST kernel_target_abort 00:20:10.053 
************************************ 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:10.053 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:10.054 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:10.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:10.621 Waiting for block devices as requested 00:20:10.621 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:10.621 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:10.621 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:10.621 No valid GPT data, bailing 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:10.880 No valid GPT data, bailing 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:10.880 No valid GPT data, bailing 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:10.880 13:29:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:10.880 No valid GPT data, bailing 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:10.880 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba --hostid=c87b64e3-aa64-4edb-937d-9804b9d918ba -a 10.0.0.1 -t tcp -s 4420 00:20:11.139 00:20:11.139 Discovery Log Number of Records 2, Generation counter 2 00:20:11.139 =====Discovery Log Entry 0====== 00:20:11.139 trtype: tcp 00:20:11.139 adrfam: ipv4 00:20:11.139 subtype: current discovery subsystem 00:20:11.139 treq: not specified, sq flow control disable supported 00:20:11.139 portid: 1 00:20:11.139 trsvcid: 4420 00:20:11.139 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:11.139 traddr: 10.0.0.1 00:20:11.139 eflags: none 00:20:11.139 sectype: none 00:20:11.139 =====Discovery Log Entry 1====== 00:20:11.139 trtype: tcp 00:20:11.139 adrfam: ipv4 00:20:11.139 subtype: nvme subsystem 00:20:11.139 treq: not specified, sq flow control disable supported 00:20:11.139 portid: 1 00:20:11.139 trsvcid: 4420 00:20:11.139 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:11.139 traddr: 10.0.0.1 00:20:11.139 eflags: none 00:20:11.139 sectype: none 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:11.139 13:30:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:11.139 13:30:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:14.483 Initializing NVMe Controllers 00:20:14.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:14.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:14.483 Initialization complete. Launching workers. 00:20:14.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36164, failed: 0 00:20:14.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36164, failed to submit 0 00:20:14.483 success 0, unsuccessful 36164, failed 0 00:20:14.483 13:30:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:14.483 13:30:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:17.770 Initializing NVMe Controllers 00:20:17.770 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:17.770 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:17.770 Initialization complete. Launching workers. 
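The kernel target that these runs talk to was assembled earlier purely through nvmet's configfs interface (the mkdir/echo/ln steps in the trace). The xtrace does not show which attribute file each echo is redirected into, so the sketch below fills those in with the standard nvmet configfs attribute names; the NQN, backing device, address and port are the ones used in this run.

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"        # accept any host NQN
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # backing block device picked above
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port
  # (the trace also writes a model string, "SPDK-$nqn"; its destination is not visible in the xtrace)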
00:20:17.770 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84740, failed: 0 00:20:17.770 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36888, failed to submit 47852 00:20:17.770 success 0, unsuccessful 36888, failed 0 00:20:17.770 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:17.770 13:30:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:21.055 Initializing NVMe Controllers 00:20:21.055 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:21.055 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:21.055 Initialization complete. Launching workers. 00:20:21.055 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103466, failed: 0 00:20:21.055 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25876, failed to submit 77590 00:20:21.055 success 0, unsuccessful 25876, failed 0 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:21.055 13:30:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:21.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.850 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:23.850 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.109 00:20:24.109 real 0m13.918s 00:20:24.109 user 0m6.184s 00:20:24.109 sys 0m5.039s 00:20:24.109 13:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.109 13:30:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:24.109 ************************************ 00:20:24.109 END TEST kernel_target_abort 00:20:24.109 ************************************ 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:24.109 
13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:24.109 rmmod nvme_tcp 00:20:24.109 rmmod nvme_fabrics 00:20:24.109 rmmod nvme_keyring 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84101 ']' 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84101 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84101 ']' 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84101 00:20:24.109 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84101) - No such process 00:20:24.109 Process with pid 84101 is not found 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84101 is not found' 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:24.109 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:24.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:24.627 Waiting for block devices as requested 00:20:24.627 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:24.627 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:24.886 13:30:13 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:24.886 13:30:13 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:20:24.886 00:20:24.886 real 0m27.822s 00:20:24.886 user 0m48.983s 00:20:24.886 sys 0m8.619s 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.886 ************************************ 00:20:24.886 END TEST nvmf_abort_qd_sizes 00:20:24.886 ************************************ 00:20:24.886 13:30:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:25.145 13:30:14 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:25.145 13:30:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.145 13:30:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.145 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:25.145 ************************************ 00:20:25.145 START TEST keyring_file 00:20:25.145 ************************************ 00:20:25.145 13:30:14 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:25.145 * Looking for test storage... 
00:20:25.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:25.145 13:30:14 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.145 13:30:14 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.145 13:30:14 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.145 13:30:14 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@345 -- # : 1 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@353 -- # local d=1 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@355 -- # echo 1 00:20:25.145 13:30:14 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@353 -- # local d=2 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@355 -- # echo 2 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.146 13:30:14 keyring_file -- scripts/common.sh@368 -- # return 0 00:20:25.146 13:30:14 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.146 13:30:14 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.146 --rc genhtml_branch_coverage=1 00:20:25.146 --rc genhtml_function_coverage=1 00:20:25.146 --rc genhtml_legend=1 00:20:25.146 --rc geninfo_all_blocks=1 00:20:25.146 --rc geninfo_unexecuted_blocks=1 00:20:25.146 00:20:25.146 ' 00:20:25.146 13:30:14 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.146 --rc genhtml_branch_coverage=1 00:20:25.146 --rc genhtml_function_coverage=1 00:20:25.146 --rc genhtml_legend=1 00:20:25.146 --rc geninfo_all_blocks=1 00:20:25.146 --rc 
geninfo_unexecuted_blocks=1 00:20:25.146 00:20:25.146 ' 00:20:25.146 13:30:14 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.146 --rc genhtml_branch_coverage=1 00:20:25.146 --rc genhtml_function_coverage=1 00:20:25.146 --rc genhtml_legend=1 00:20:25.146 --rc geninfo_all_blocks=1 00:20:25.146 --rc geninfo_unexecuted_blocks=1 00:20:25.146 00:20:25.146 ' 00:20:25.146 13:30:14 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.146 --rc genhtml_branch_coverage=1 00:20:25.146 --rc genhtml_function_coverage=1 00:20:25.146 --rc genhtml_legend=1 00:20:25.146 --rc geninfo_all_blocks=1 00:20:25.146 --rc geninfo_unexecuted_blocks=1 00:20:25.146 00:20:25.146 ' 00:20:25.146 13:30:14 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:25.146 13:30:14 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.146 13:30:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.405 13:30:14 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.405 13:30:14 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.405 13:30:14 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.405 13:30:14 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.405 13:30:14 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.405 13:30:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.405 13:30:14 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.405 13:30:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:25.405 13:30:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@51 -- # : 0 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.405 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.405 13:30:14 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:25.406 13:30:14 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mUpPC2APfc 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mUpPC2APfc 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mUpPC2APfc 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mUpPC2APfc 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d2ZWChrD1K 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:25.406 13:30:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d2ZWChrD1K 00:20:25.406 13:30:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d2ZWChrD1K 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.d2ZWChrD1K 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=85010 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.406 13:30:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85010 00:20:25.406 13:30:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85010 ']' 00:20:25.406 13:30:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.406 13:30:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
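In outline, each key file used by this test is prepared as sketched below: a temp file from mktemp, the hex key wrapped into the NVMe TLS PSK interchange format (the trace does this with an inline python helper from nvmf/common.sh), and the permissions tightened to 0600 before the file is registered with the keyring. The interchange string below is a placeholder; the real value is produced by that helper from the key 00112233445566778899aabbccddeeff and digest 0.

  key0path=$(mktemp)                        # e.g. /tmp/tmp.mUpPC2APfc in this run
  echo "NVMeTLSkey-1:..." > "$key0path"     # placeholder for the interchange-formatted PSK
  chmod 0600 "$key0path"                    # restrictive permissions, as the test sets before use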
00:20:25.406 13:30:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.406 13:30:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.406 13:30:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:25.406 [2024-11-17 13:30:14.579412] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:20:25.406 [2024-11-17 13:30:14.580184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85010 ] 00:20:25.665 [2024-11-17 13:30:14.735796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.665 [2024-11-17 13:30:14.800833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.924 [2024-11-17 13:30:14.905832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:26.183 13:30:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:26.183 [2024-11-17 13:30:15.195518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.183 null0 00:20:26.183 [2024-11-17 13:30:15.227485] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.183 [2024-11-17 13:30:15.227690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.183 13:30:15 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:26.183 [2024-11-17 13:30:15.255466] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:26.183 request: 00:20:26.183 { 00:20:26.183 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.183 "secure_channel": false, 00:20:26.183 "listen_address": { 00:20:26.183 "trtype": "tcp", 00:20:26.183 "traddr": "127.0.0.1", 00:20:26.183 "trsvcid": "4420" 00:20:26.183 }, 00:20:26.183 "method": "nvmf_subsystem_add_listener", 
00:20:26.183 "req_id": 1 00:20:26.183 } 00:20:26.183 Got JSON-RPC error response 00:20:26.183 response: 00:20:26.183 { 00:20:26.183 "code": -32602, 00:20:26.183 "message": "Invalid parameters" 00:20:26.183 } 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.183 13:30:15 keyring_file -- keyring/file.sh@47 -- # bperfpid=85020 00:20:26.183 13:30:15 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85020 /var/tmp/bperf.sock 00:20:26.183 13:30:15 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85020 ']' 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.183 13:30:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:26.183 [2024-11-17 13:30:15.321716] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:20:26.183 [2024-11-17 13:30:15.321830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85020 ] 00:20:26.442 [2024-11-17 13:30:15.473278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.442 [2024-11-17 13:30:15.529300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.442 [2024-11-17 13:30:15.585773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.379 13:30:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.379 13:30:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:27.379 13:30:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:27.379 13:30:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:27.379 13:30:16 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d2ZWChrD1K 00:20:27.379 13:30:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d2ZWChrD1K 00:20:27.637 13:30:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:27.637 13:30:16 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:20:27.637 13:30:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:27.637 13:30:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:27.637 13:30:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:27.896 13:30:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mUpPC2APfc == \/\t\m\p\/\t\m\p\.\m\U\p\P\C\2\A\P\f\c ]] 00:20:27.896 13:30:17 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:20:27.896 13:30:17 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:20:27.896 13:30:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:27.896 13:30:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:27.896 13:30:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:28.154 13:30:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.d2ZWChrD1K == \/\t\m\p\/\t\m\p\.\d\2\Z\W\C\h\r\D\1\K ]] 00:20:28.154 13:30:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:20:28.154 13:30:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:28.154 13:30:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:28.154 13:30:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:28.154 13:30:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:28.154 13:30:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:28.413 13:30:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:28.413 13:30:17 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:20:28.413 13:30:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:28.413 13:30:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:28.413 13:30:17 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:28.413 13:30:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:28.413 13:30:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:28.981 13:30:17 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:20:28.981 13:30:17 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:28.981 13:30:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:28.981 [2024-11-17 13:30:18.170525] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.240 nvme0n1 00:20:29.240 13:30:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:20:29.240 13:30:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:29.240 13:30:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:29.240 13:30:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:29.240 13:30:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:29.240 13:30:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:29.499 13:30:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:20:29.499 13:30:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:20:29.499 13:30:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:29.499 13:30:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:29.499 13:30:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:29.499 13:30:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:29.499 13:30:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:29.757 13:30:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:20:29.757 13:30:18 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:29.757 Running I/O for 1 seconds... 
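The per-key refcount checks above all reduce to a single RPC-plus-jq pipeline against the bdevperf RPC socket; a minimal standalone sketch of that pattern (socket path as configured above, key name picked for illustration):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'

A refcount of 1 indicates no consumer has taken the key yet; it climbs to 2 once a controller is attached with --psk key0, which is exactly what the checks before and after bdev_nvme_attach_controller assert.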
00:20:30.695 13784.00 IOPS, 53.84 MiB/s 00:20:30.695 Latency(us) 00:20:30.695 [2024-11-17T13:30:19.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.695 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:30.695 nvme0n1 : 1.01 13836.26 54.05 0.00 0.00 9230.65 3768.32 21090.68 00:20:30.695 [2024-11-17T13:30:19.919Z] =================================================================================================================== 00:20:30.695 [2024-11-17T13:30:19.919Z] Total : 13836.26 54.05 0.00 0.00 9230.65 3768.32 21090.68 00:20:30.695 { 00:20:30.695 "results": [ 00:20:30.695 { 00:20:30.695 "job": "nvme0n1", 00:20:30.695 "core_mask": "0x2", 00:20:30.695 "workload": "randrw", 00:20:30.695 "percentage": 50, 00:20:30.695 "status": "finished", 00:20:30.695 "queue_depth": 128, 00:20:30.695 "io_size": 4096, 00:20:30.695 "runtime": 1.005546, 00:20:30.695 "iops": 13836.264079415561, 00:20:30.695 "mibps": 54.04790656021704, 00:20:30.695 "io_failed": 0, 00:20:30.695 "io_timeout": 0, 00:20:30.695 "avg_latency_us": 9230.651263239743, 00:20:30.695 "min_latency_us": 3768.32, 00:20:30.695 "max_latency_us": 21090.676363636365 00:20:30.695 } 00:20:30.695 ], 00:20:30.695 "core_count": 1 00:20:30.695 } 00:20:30.695 13:30:19 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:30.695 13:30:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:31.262 13:30:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.262 13:30:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:31.262 13:30:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:31.262 13:30:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.521 13:30:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:20:31.521 13:30:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:31.521 13:30:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:31.521 13:30:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:31.521 13:30:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:31.521 13:30:20 keyring_file -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:20:31.521 13:30:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:31.521 13:30:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.522 13:30:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:31.522 13:30:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:31.780 [2024-11-17 13:30:20.962618] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:31.780 [2024-11-17 13:30:20.963346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a770 (107): Transport endpoint is not connected 00:20:31.780 [2024-11-17 13:30:20.964317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a770 (9): Bad file descriptor 00:20:31.781 [2024-11-17 13:30:20.965314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:31.781 [2024-11-17 13:30:20.965353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:31.781 [2024-11-17 13:30:20.965363] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:31.781 [2024-11-17 13:30:20.965372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:31.781 request: 00:20:31.781 { 00:20:31.781 "name": "nvme0", 00:20:31.781 "trtype": "tcp", 00:20:31.781 "traddr": "127.0.0.1", 00:20:31.781 "adrfam": "ipv4", 00:20:31.781 "trsvcid": "4420", 00:20:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:31.781 "prchk_reftag": false, 00:20:31.781 "prchk_guard": false, 00:20:31.781 "hdgst": false, 00:20:31.781 "ddgst": false, 00:20:31.781 "psk": "key1", 00:20:31.781 "allow_unrecognized_csi": false, 00:20:31.781 "method": "bdev_nvme_attach_controller", 00:20:31.781 "req_id": 1 00:20:31.781 } 00:20:31.781 Got JSON-RPC error response 00:20:31.781 response: 00:20:31.781 { 00:20:31.781 "code": -5, 00:20:31.781 "message": "Input/output error" 00:20:31.781 } 00:20:31.781 13:30:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:31.781 13:30:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.781 13:30:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.781 13:30:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.781 13:30:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:20:31.781 13:30:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:31.781 13:30:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:31.781 13:30:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.781 13:30:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.781 13:30:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:32.039 13:30:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:32.039 13:30:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:20:32.039 13:30:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:32.039 13:30:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:32.039 13:30:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:32.040 13:30:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:32.040 13:30:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:32.298 13:30:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:20:32.298 13:30:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:20:32.298 13:30:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:32.557 13:30:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:20:32.557 13:30:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:32.816 13:30:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:20:32.816 13:30:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:20:32.816 13:30:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:33.074 13:30:22 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:20:33.074 13:30:22 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.mUpPC2APfc 00:20:33.074 13:30:22 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:33.075 13:30:22 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:20:33.075 13:30:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:33.075 13:30:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:33.075 13:30:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.075 13:30:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:33.075 13:30:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.075 13:30:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:33.075 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:33.333 [2024-11-17 13:30:22.359666] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mUpPC2APfc': 0100660 00:20:33.333 [2024-11-17 13:30:22.359700] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:33.333 request: 00:20:33.333 { 00:20:33.333 "name": "key0", 00:20:33.333 "path": "/tmp/tmp.mUpPC2APfc", 00:20:33.333 "method": "keyring_file_add_key", 00:20:33.333 "req_id": 1 00:20:33.333 } 00:20:33.333 Got JSON-RPC error response 00:20:33.333 response: 00:20:33.333 { 00:20:33.333 "code": -1, 00:20:33.333 "message": "Operation not permitted" 00:20:33.334 } 00:20:33.334 13:30:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:33.334 13:30:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.334 13:30:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.334 13:30:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.334 13:30:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.mUpPC2APfc 00:20:33.334 13:30:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:33.334 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mUpPC2APfc 00:20:33.592 13:30:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.mUpPC2APfc 00:20:33.592 13:30:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:20:33.592 13:30:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:33.592 13:30:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:33.592 13:30:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:33.592 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:33.592 13:30:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:33.860 13:30:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:20:33.860 13:30:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:33.860 13:30:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:33.860 13:30:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:33.860 13:30:22 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:33.860 13:30:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.860 13:30:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:33.860 13:30:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:33.860 13:30:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:33.860 13:30:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:33.861 [2024-11-17 13:30:23.079849] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mUpPC2APfc': No such file or directory 00:20:33.861 [2024-11-17 13:30:23.079889] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:33.861 [2024-11-17 13:30:23.079907] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:33.861 [2024-11-17 13:30:23.079915] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:20:33.861 [2024-11-17 13:30:23.079930] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:33.861 [2024-11-17 13:30:23.079938] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:34.128 request: 00:20:34.128 { 00:20:34.128 "name": "nvme0", 00:20:34.128 "trtype": "tcp", 00:20:34.128 "traddr": "127.0.0.1", 00:20:34.128 "adrfam": "ipv4", 00:20:34.128 "trsvcid": "4420", 00:20:34.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.128 "prchk_reftag": false, 00:20:34.128 "prchk_guard": false, 00:20:34.128 "hdgst": false, 00:20:34.128 "ddgst": false, 00:20:34.128 "psk": "key0", 00:20:34.128 "allow_unrecognized_csi": false, 00:20:34.128 "method": "bdev_nvme_attach_controller", 00:20:34.128 "req_id": 1 00:20:34.128 } 00:20:34.128 Got JSON-RPC error response 00:20:34.128 response: 00:20:34.128 { 00:20:34.128 "code": -19, 00:20:34.128 "message": "No such device" 00:20:34.128 } 00:20:34.128 13:30:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:34.128 13:30:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.128 13:30:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.128 13:30:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.128 13:30:23 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:34.128 13:30:23 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:34.128 
13:30:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1kpi53yBPX 00:20:34.128 13:30:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:34.128 13:30:23 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:34.128 13:30:23 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:34.128 13:30:23 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:34.128 13:30:23 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:34.128 13:30:23 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:34.128 13:30:23 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:34.387 13:30:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1kpi53yBPX 00:20:34.387 13:30:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1kpi53yBPX 00:20:34.387 13:30:23 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1kpi53yBPX 00:20:34.387 13:30:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1kpi53yBPX 00:20:34.387 13:30:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1kpi53yBPX 00:20:34.646 13:30:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:34.646 13:30:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:34.905 nvme0n1 00:20:34.905 13:30:24 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:20:34.905 13:30:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:34.905 13:30:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:34.905 13:30:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:34.905 13:30:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:34.905 13:30:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:35.164 13:30:24 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:20:35.164 13:30:24 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:20:35.164 13:30:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:35.423 13:30:24 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:20:35.423 13:30:24 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:20:35.423 13:30:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:35.423 13:30:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:35.423 13:30:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:35.682 13:30:24 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:20:35.682 13:30:24 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:20:35.682 13:30:24 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:20:35.682 13:30:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:35.682 13:30:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:35.682 13:30:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:35.682 13:30:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:35.941 13:30:25 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:20:35.941 13:30:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:35.941 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:36.200 13:30:25 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:20:36.200 13:30:25 keyring_file -- keyring/file.sh@105 -- # jq length 00:20:36.200 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:36.459 13:30:25 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:20:36.459 13:30:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1kpi53yBPX 00:20:36.459 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1kpi53yBPX 00:20:36.717 13:30:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d2ZWChrD1K 00:20:36.717 13:30:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d2ZWChrD1K 00:20:36.976 13:30:26 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:36.976 13:30:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:37.235 nvme0n1 00:20:37.235 13:30:26 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:20:37.235 13:30:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:37.494 13:30:26 keyring_file -- keyring/file.sh@113 -- # config='{ 00:20:37.494 "subsystems": [ 00:20:37.494 { 00:20:37.494 "subsystem": "keyring", 00:20:37.494 "config": [ 00:20:37.494 { 00:20:37.494 "method": "keyring_file_add_key", 00:20:37.494 "params": { 00:20:37.494 "name": "key0", 00:20:37.494 "path": "/tmp/tmp.1kpi53yBPX" 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "keyring_file_add_key", 00:20:37.494 "params": { 00:20:37.494 "name": "key1", 00:20:37.494 "path": "/tmp/tmp.d2ZWChrD1K" 00:20:37.494 } 00:20:37.494 } 00:20:37.494 ] 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "subsystem": "iobuf", 00:20:37.494 "config": [ 00:20:37.494 { 00:20:37.494 "method": "iobuf_set_options", 00:20:37.494 "params": { 00:20:37.494 "small_pool_count": 8192, 00:20:37.494 "large_pool_count": 1024, 00:20:37.494 "small_bufsize": 8192, 00:20:37.494 "large_bufsize": 135168, 00:20:37.494 "enable_numa": false 00:20:37.494 } 00:20:37.494 } 00:20:37.494 ] 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "subsystem": 
"sock", 00:20:37.494 "config": [ 00:20:37.494 { 00:20:37.494 "method": "sock_set_default_impl", 00:20:37.494 "params": { 00:20:37.494 "impl_name": "uring" 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "sock_impl_set_options", 00:20:37.494 "params": { 00:20:37.494 "impl_name": "ssl", 00:20:37.494 "recv_buf_size": 4096, 00:20:37.494 "send_buf_size": 4096, 00:20:37.494 "enable_recv_pipe": true, 00:20:37.494 "enable_quickack": false, 00:20:37.494 "enable_placement_id": 0, 00:20:37.494 "enable_zerocopy_send_server": true, 00:20:37.494 "enable_zerocopy_send_client": false, 00:20:37.494 "zerocopy_threshold": 0, 00:20:37.494 "tls_version": 0, 00:20:37.494 "enable_ktls": false 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "sock_impl_set_options", 00:20:37.494 "params": { 00:20:37.494 "impl_name": "posix", 00:20:37.494 "recv_buf_size": 2097152, 00:20:37.494 "send_buf_size": 2097152, 00:20:37.494 "enable_recv_pipe": true, 00:20:37.494 "enable_quickack": false, 00:20:37.494 "enable_placement_id": 0, 00:20:37.494 "enable_zerocopy_send_server": true, 00:20:37.494 "enable_zerocopy_send_client": false, 00:20:37.494 "zerocopy_threshold": 0, 00:20:37.494 "tls_version": 0, 00:20:37.494 "enable_ktls": false 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "sock_impl_set_options", 00:20:37.494 "params": { 00:20:37.494 "impl_name": "uring", 00:20:37.494 "recv_buf_size": 2097152, 00:20:37.494 "send_buf_size": 2097152, 00:20:37.494 "enable_recv_pipe": true, 00:20:37.494 "enable_quickack": false, 00:20:37.494 "enable_placement_id": 0, 00:20:37.494 "enable_zerocopy_send_server": false, 00:20:37.494 "enable_zerocopy_send_client": false, 00:20:37.494 "zerocopy_threshold": 0, 00:20:37.494 "tls_version": 0, 00:20:37.494 "enable_ktls": false 00:20:37.494 } 00:20:37.494 } 00:20:37.494 ] 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "subsystem": "vmd", 00:20:37.494 "config": [] 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "subsystem": "accel", 00:20:37.494 "config": [ 00:20:37.494 { 00:20:37.494 "method": "accel_set_options", 00:20:37.494 "params": { 00:20:37.494 "small_cache_size": 128, 00:20:37.494 "large_cache_size": 16, 00:20:37.494 "task_count": 2048, 00:20:37.494 "sequence_count": 2048, 00:20:37.494 "buf_count": 2048 00:20:37.494 } 00:20:37.494 } 00:20:37.494 ] 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "subsystem": "bdev", 00:20:37.494 "config": [ 00:20:37.494 { 00:20:37.494 "method": "bdev_set_options", 00:20:37.494 "params": { 00:20:37.494 "bdev_io_pool_size": 65535, 00:20:37.494 "bdev_io_cache_size": 256, 00:20:37.494 "bdev_auto_examine": true, 00:20:37.494 "iobuf_small_cache_size": 128, 00:20:37.494 "iobuf_large_cache_size": 16 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "bdev_raid_set_options", 00:20:37.494 "params": { 00:20:37.494 "process_window_size_kb": 1024, 00:20:37.494 "process_max_bandwidth_mb_sec": 0 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "bdev_iscsi_set_options", 00:20:37.494 "params": { 00:20:37.494 "timeout_sec": 30 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "method": "bdev_nvme_set_options", 00:20:37.494 "params": { 00:20:37.494 "action_on_timeout": "none", 00:20:37.494 "timeout_us": 0, 00:20:37.494 "timeout_admin_us": 0, 00:20:37.494 "keep_alive_timeout_ms": 10000, 00:20:37.494 "arbitration_burst": 0, 00:20:37.494 "low_priority_weight": 0, 00:20:37.494 "medium_priority_weight": 0, 00:20:37.494 "high_priority_weight": 0, 00:20:37.494 "nvme_adminq_poll_period_us": 
10000, 00:20:37.494 "nvme_ioq_poll_period_us": 0, 00:20:37.494 "io_queue_requests": 512, 00:20:37.494 "delay_cmd_submit": true, 00:20:37.494 "transport_retry_count": 4, 00:20:37.494 "bdev_retry_count": 3, 00:20:37.494 "transport_ack_timeout": 0, 00:20:37.494 "ctrlr_loss_timeout_sec": 0, 00:20:37.494 "reconnect_delay_sec": 0, 00:20:37.494 "fast_io_fail_timeout_sec": 0, 00:20:37.494 "disable_auto_failback": false, 00:20:37.494 "generate_uuids": false, 00:20:37.494 "transport_tos": 0, 00:20:37.494 "nvme_error_stat": false, 00:20:37.494 "rdma_srq_size": 0, 00:20:37.494 "io_path_stat": false, 00:20:37.494 "allow_accel_sequence": false, 00:20:37.494 "rdma_max_cq_size": 0, 00:20:37.494 "rdma_cm_event_timeout_ms": 0, 00:20:37.494 "dhchap_digests": [ 00:20:37.494 "sha256", 00:20:37.494 "sha384", 00:20:37.494 "sha512" 00:20:37.494 ], 00:20:37.494 "dhchap_dhgroups": [ 00:20:37.494 "null", 00:20:37.494 "ffdhe2048", 00:20:37.494 "ffdhe3072", 00:20:37.494 "ffdhe4096", 00:20:37.495 "ffdhe6144", 00:20:37.495 "ffdhe8192" 00:20:37.495 ] 00:20:37.495 } 00:20:37.495 }, 00:20:37.495 { 00:20:37.495 "method": "bdev_nvme_attach_controller", 00:20:37.495 "params": { 00:20:37.495 "name": "nvme0", 00:20:37.495 "trtype": "TCP", 00:20:37.495 "adrfam": "IPv4", 00:20:37.495 "traddr": "127.0.0.1", 00:20:37.495 "trsvcid": "4420", 00:20:37.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.495 "prchk_reftag": false, 00:20:37.495 "prchk_guard": false, 00:20:37.495 "ctrlr_loss_timeout_sec": 0, 00:20:37.495 "reconnect_delay_sec": 0, 00:20:37.495 "fast_io_fail_timeout_sec": 0, 00:20:37.495 "psk": "key0", 00:20:37.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.495 "hdgst": false, 00:20:37.495 "ddgst": false, 00:20:37.495 "multipath": "multipath" 00:20:37.495 } 00:20:37.495 }, 00:20:37.495 { 00:20:37.495 "method": "bdev_nvme_set_hotplug", 00:20:37.495 "params": { 00:20:37.495 "period_us": 100000, 00:20:37.495 "enable": false 00:20:37.495 } 00:20:37.495 }, 00:20:37.495 { 00:20:37.495 "method": "bdev_wait_for_examine" 00:20:37.495 } 00:20:37.495 ] 00:20:37.495 }, 00:20:37.495 { 00:20:37.495 "subsystem": "nbd", 00:20:37.495 "config": [] 00:20:37.495 } 00:20:37.495 ] 00:20:37.495 }' 00:20:37.495 13:30:26 keyring_file -- keyring/file.sh@115 -- # killprocess 85020 00:20:37.495 13:30:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85020 ']' 00:20:37.495 13:30:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85020 00:20:37.495 13:30:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:37.495 13:30:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.495 13:30:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85020 00:20:37.754 killing process with pid 85020 00:20:37.754 Received shutdown signal, test time was about 1.000000 seconds 00:20:37.754 00:20:37.754 Latency(us) 00:20:37.754 [2024-11-17T13:30:26.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.755 [2024-11-17T13:30:26.979Z] =================================================================================================================== 00:20:37.755 [2024-11-17T13:30:26.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85020' 00:20:37.755 
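The JSON captured by save_config above is replayed into a fresh bdevperf instance in the lines that follow: pid 85020 is killed and a new process is started with the configuration passed over a process-substitution file descriptor (/dev/fd/63), so the key files are loaded at startup rather than via live RPCs. A rough sketch of that relaunch pattern, with the binary path and flags as echoed below:

  config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 \
      -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")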
13:30:26 keyring_file -- common/autotest_common.sh@973 -- # kill 85020 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@978 -- # wait 85020 00:20:37.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:37.755 13:30:26 keyring_file -- keyring/file.sh@118 -- # bperfpid=85270 00:20:37.755 13:30:26 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85270 /var/tmp/bperf.sock 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85270 ']' 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.755 13:30:26 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.755 13:30:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:37.755 13:30:26 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:20:37.755 "subsystems": [ 00:20:37.755 { 00:20:37.755 "subsystem": "keyring", 00:20:37.755 "config": [ 00:20:37.755 { 00:20:37.755 "method": "keyring_file_add_key", 00:20:37.755 "params": { 00:20:37.755 "name": "key0", 00:20:37.755 "path": "/tmp/tmp.1kpi53yBPX" 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "keyring_file_add_key", 00:20:37.755 "params": { 00:20:37.755 "name": "key1", 00:20:37.755 "path": "/tmp/tmp.d2ZWChrD1K" 00:20:37.755 } 00:20:37.755 } 00:20:37.755 ] 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "subsystem": "iobuf", 00:20:37.755 "config": [ 00:20:37.755 { 00:20:37.755 "method": "iobuf_set_options", 00:20:37.755 "params": { 00:20:37.755 "small_pool_count": 8192, 00:20:37.755 "large_pool_count": 1024, 00:20:37.755 "small_bufsize": 8192, 00:20:37.755 "large_bufsize": 135168, 00:20:37.755 "enable_numa": false 00:20:37.755 } 00:20:37.755 } 00:20:37.755 ] 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "subsystem": "sock", 00:20:37.755 "config": [ 00:20:37.755 { 00:20:37.755 "method": "sock_set_default_impl", 00:20:37.755 "params": { 00:20:37.755 "impl_name": "uring" 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "sock_impl_set_options", 00:20:37.755 "params": { 00:20:37.755 "impl_name": "ssl", 00:20:37.755 "recv_buf_size": 4096, 00:20:37.755 "send_buf_size": 4096, 00:20:37.755 "enable_recv_pipe": true, 00:20:37.755 "enable_quickack": false, 00:20:37.755 "enable_placement_id": 0, 00:20:37.755 "enable_zerocopy_send_server": true, 00:20:37.755 "enable_zerocopy_send_client": false, 00:20:37.755 "zerocopy_threshold": 0, 00:20:37.755 "tls_version": 0, 00:20:37.755 "enable_ktls": false 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "sock_impl_set_options", 00:20:37.755 "params": { 00:20:37.755 "impl_name": "posix", 00:20:37.755 "recv_buf_size": 2097152, 00:20:37.755 "send_buf_size": 2097152, 00:20:37.755 "enable_recv_pipe": true, 00:20:37.755 "enable_quickack": false, 00:20:37.755 "enable_placement_id": 0, 00:20:37.755 "enable_zerocopy_send_server": true, 00:20:37.755 "enable_zerocopy_send_client": false, 00:20:37.755 "zerocopy_threshold": 0, 00:20:37.755 "tls_version": 0, 00:20:37.755 "enable_ktls": false 
00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "sock_impl_set_options", 00:20:37.755 "params": { 00:20:37.755 "impl_name": "uring", 00:20:37.755 "recv_buf_size": 2097152, 00:20:37.755 "send_buf_size": 2097152, 00:20:37.755 "enable_recv_pipe": true, 00:20:37.755 "enable_quickack": false, 00:20:37.755 "enable_placement_id": 0, 00:20:37.755 "enable_zerocopy_send_server": false, 00:20:37.755 "enable_zerocopy_send_client": false, 00:20:37.755 "zerocopy_threshold": 0, 00:20:37.755 "tls_version": 0, 00:20:37.755 "enable_ktls": false 00:20:37.755 } 00:20:37.755 } 00:20:37.755 ] 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "subsystem": "vmd", 00:20:37.755 "config": [] 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "subsystem": "accel", 00:20:37.755 "config": [ 00:20:37.755 { 00:20:37.755 "method": "accel_set_options", 00:20:37.755 "params": { 00:20:37.755 "small_cache_size": 128, 00:20:37.755 "large_cache_size": 16, 00:20:37.755 "task_count": 2048, 00:20:37.755 "sequence_count": 2048, 00:20:37.755 "buf_count": 2048 00:20:37.755 } 00:20:37.755 } 00:20:37.755 ] 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "subsystem": "bdev", 00:20:37.755 "config": [ 00:20:37.755 { 00:20:37.755 "method": "bdev_set_options", 00:20:37.755 "params": { 00:20:37.755 "bdev_io_pool_size": 65535, 00:20:37.755 "bdev_io_cache_size": 256, 00:20:37.755 "bdev_auto_examine": true, 00:20:37.755 "iobuf_small_cache_size": 128, 00:20:37.755 "iobuf_large_cache_size": 16 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "bdev_raid_set_options", 00:20:37.755 "params": { 00:20:37.755 "process_window_size_kb": 1024, 00:20:37.755 "process_max_bandwidth_mb_sec": 0 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "bdev_iscsi_set_options", 00:20:37.755 "params": { 00:20:37.755 "timeout_sec": 30 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "bdev_nvme_set_options", 00:20:37.755 "params": { 00:20:37.755 "action_on_timeout": "none", 00:20:37.755 "timeout_us": 0, 00:20:37.755 "timeout_admin_us": 0, 00:20:37.755 "keep_alive_timeout_ms": 10000, 00:20:37.755 "arbitration_burst": 0, 00:20:37.755 "low_priority_weight": 0, 00:20:37.755 "medium_priority_weight": 0, 00:20:37.755 "high_priority_weight": 0, 00:20:37.755 "nvme_adminq_poll_period_us": 10000, 00:20:37.755 "nvme_ioq_poll_period_us": 0, 00:20:37.755 "io_queue_requests": 512, 00:20:37.755 "delay_cmd_submit": true, 00:20:37.755 "transport_retry_count": 4, 00:20:37.755 "bdev_retry_count": 3, 00:20:37.755 "transport_ack_timeout": 0, 00:20:37.755 "ctrlr_loss_timeout_sec": 0, 00:20:37.755 "reconnect_delay_sec": 0, 00:20:37.755 "fast_io_fail_timeout_sec": 0, 00:20:37.755 "disable_auto_failback": false, 00:20:37.755 "generate_uuids": false, 00:20:37.755 "transport_tos": 0, 00:20:37.755 "nvme_error_stat": false, 00:20:37.755 "rdma_srq_size": 0, 00:20:37.755 "io_path_stat": false, 00:20:37.755 "allow_accel_sequence": false, 00:20:37.755 "rdma_max_cq_size": 0, 00:20:37.755 "rdma_cm_event_timeout_ms": 0, 00:20:37.755 "dhchap_digests": [ 00:20:37.755 "sha256", 00:20:37.755 "sha384", 00:20:37.755 "sha512" 00:20:37.755 ], 00:20:37.755 "dhchap_dhgroups": [ 00:20:37.755 "null", 00:20:37.755 "ffdhe2048", 00:20:37.755 "ffdhe3072", 00:20:37.755 "ffdhe4096", 00:20:37.755 "ffdhe6144", 00:20:37.755 "ffdhe8192" 00:20:37.755 ] 00:20:37.755 } 00:20:37.755 }, 00:20:37.755 { 00:20:37.755 "method": "bdev_nvme_attach_controller", 00:20:37.755 "params": { 00:20:37.755 "name": "nvme0", 00:20:37.755 "trtype": "TCP", 00:20:37.755 "adrfam": "IPv4", 
00:20:37.755 "traddr": "127.0.0.1", 00:20:37.755 "trsvcid": "4420", 00:20:37.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.755 "prchk_reftag": false, 00:20:37.755 "prchk_guard": false, 00:20:37.756 "ctrlr_loss_timeout_sec": 0, 00:20:37.756 "reconnect_delay_sec": 0, 00:20:37.756 "fast_io_fail_timeout_sec": 0, 00:20:37.756 "psk": "key0", 00:20:37.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.756 "hdgst": false, 00:20:37.756 "ddgst": false, 00:20:37.756 "multipath": "multipath" 00:20:37.756 } 00:20:37.756 }, 00:20:37.756 { 00:20:37.756 "method": "bdev_nvme_set_hotplug", 00:20:37.756 "params": { 00:20:37.756 "period_us": 100000, 00:20:37.756 "enable": false 00:20:37.756 } 00:20:37.756 }, 00:20:37.756 { 00:20:37.756 "method": "bdev_wait_for_examine" 00:20:37.756 } 00:20:37.756 ] 00:20:37.756 }, 00:20:37.756 { 00:20:37.756 "subsystem": "nbd", 00:20:37.756 "config": [] 00:20:37.756 } 00:20:37.756 ] 00:20:37.756 }' 00:20:37.756 [2024-11-17 13:30:26.941906] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:20:37.756 [2024-11-17 13:30:26.942001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85270 ] 00:20:38.014 [2024-11-17 13:30:27.081448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.014 [2024-11-17 13:30:27.122557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.273 [2024-11-17 13:30:27.253389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:38.273 [2024-11-17 13:30:27.309546] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.841 13:30:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.841 13:30:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:38.841 13:30:27 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:20:38.841 13:30:27 keyring_file -- keyring/file.sh@121 -- # jq length 00:20:38.841 13:30:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:39.100 13:30:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:39.100 13:30:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:20:39.100 13:30:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:39.100 13:30:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:39.100 13:30:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:39.100 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:39.100 13:30:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:39.359 13:30:28 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:20:39.359 13:30:28 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:20:39.359 13:30:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:39.359 13:30:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:39.359 13:30:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:39.359 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:39.359 13:30:28 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:39.618 13:30:28 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:20:39.618 13:30:28 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:20:39.618 13:30:28 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:20:39.618 13:30:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:39.878 13:30:28 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:20:39.878 13:30:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:39.878 13:30:28 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1kpi53yBPX /tmp/tmp.d2ZWChrD1K 00:20:39.878 13:30:28 keyring_file -- keyring/file.sh@20 -- # killprocess 85270 00:20:39.878 13:30:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85270 ']' 00:20:39.878 13:30:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85270 00:20:39.878 13:30:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:39.878 13:30:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.878 13:30:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85270 00:20:39.878 13:30:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.878 killing process with pid 85270 00:20:39.878 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.878 00:20:39.878 Latency(us) 00:20:39.878 [2024-11-17T13:30:29.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.878 [2024-11-17T13:30:29.102Z] =================================================================================================================== 00:20:39.878 [2024-11-17T13:30:29.102Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.878 13:30:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.878 13:30:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85270' 00:20:39.878 13:30:29 keyring_file -- common/autotest_common.sh@973 -- # kill 85270 00:20:39.878 13:30:29 keyring_file -- common/autotest_common.sh@978 -- # wait 85270 00:20:40.139 13:30:29 keyring_file -- keyring/file.sh@21 -- # killprocess 85010 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85010 ']' 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85010 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85010 00:20:40.139 killing process with pid 85010 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85010' 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@973 -- # kill 85010 00:20:40.139 13:30:29 keyring_file -- common/autotest_common.sh@978 -- # wait 85010 00:20:40.706 00:20:40.706 real 0m15.588s 00:20:40.706 user 0m38.698s 00:20:40.706 sys 0m3.109s 00:20:40.706 13:30:29 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.706 13:30:29 keyring_file 
-- common/autotest_common.sh@10 -- # set +x 00:20:40.706 ************************************ 00:20:40.706 END TEST keyring_file 00:20:40.706 ************************************ 00:20:40.706 13:30:29 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:20:40.706 13:30:29 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:40.706 13:30:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.706 13:30:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.706 13:30:29 -- common/autotest_common.sh@10 -- # set +x 00:20:40.706 ************************************ 00:20:40.706 START TEST keyring_linux 00:20:40.706 ************************************ 00:20:40.706 13:30:29 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:40.706 Joined session keyring: 1001098880 00:20:40.706 * Looking for test storage... 00:20:40.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:40.706 13:30:29 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:40.706 13:30:29 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:20:40.706 13:30:29 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:40.966 13:30:29 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@345 -- # : 1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.966 13:30:29 keyring_linux -- scripts/common.sh@368 -- # return 0 00:20:40.966 13:30:29 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.966 13:30:29 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.966 --rc genhtml_branch_coverage=1 00:20:40.966 --rc genhtml_function_coverage=1 00:20:40.966 --rc genhtml_legend=1 00:20:40.966 --rc geninfo_all_blocks=1 00:20:40.966 --rc geninfo_unexecuted_blocks=1 00:20:40.966 00:20:40.966 ' 00:20:40.966 13:30:29 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.966 --rc genhtml_branch_coverage=1 00:20:40.966 --rc genhtml_function_coverage=1 00:20:40.966 --rc genhtml_legend=1 00:20:40.966 --rc geninfo_all_blocks=1 00:20:40.966 --rc geninfo_unexecuted_blocks=1 00:20:40.966 00:20:40.966 ' 00:20:40.966 13:30:30 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.966 --rc genhtml_branch_coverage=1 00:20:40.966 --rc genhtml_function_coverage=1 00:20:40.966 --rc genhtml_legend=1 00:20:40.966 --rc geninfo_all_blocks=1 00:20:40.966 --rc geninfo_unexecuted_blocks=1 00:20:40.966 00:20:40.966 ' 00:20:40.966 13:30:30 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:40.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.966 --rc genhtml_branch_coverage=1 00:20:40.966 --rc genhtml_function_coverage=1 00:20:40.966 --rc genhtml_legend=1 00:20:40.966 --rc geninfo_all_blocks=1 00:20:40.966 --rc geninfo_unexecuted_blocks=1 00:20:40.966 00:20:40.966 ' 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.966 13:30:30 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c87b64e3-aa64-4edb-937d-9804b9d918ba 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=c87b64e3-aa64-4edb-937d-9804b9d918ba 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.966 13:30:30 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.966 13:30:30 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.966 13:30:30 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.966 13:30:30 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.966 13:30:30 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.966 13:30:30 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.966 13:30:30 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.966 13:30:30 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:40.966 13:30:30 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.966 13:30:30 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:40.966 13:30:30 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:40.966 13:30:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:40.967 /tmp/:spdk-test:key0 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:40.967 13:30:30 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:40.967 13:30:30 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:40.967 /tmp/:spdk-test:key1 00:20:40.967 13:30:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:40.967 13:30:30 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85396 00:20:40.967 13:30:30 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:40.967 13:30:30 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85396 00:20:40.967 13:30:30 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85396 ']' 00:20:40.967 13:30:30 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.967 13:30:30 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.967 13:30:30 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.967 13:30:30 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.967 13:30:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:41.226 [2024-11-17 13:30:30.202021] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
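
The prep_key calls above convert a raw hex test key into the NVMe/TCP PSK interchange form (the NVMeTLSkey-1:00:...: strings that show up further down) and store it in a mode-0600 file. A standalone sketch of that conversion, assuming the trailing four bytes are a little-endian CRC-32 of the key material; this is an illustration of the format, not the project's own helper:

key=00112233445566778899aabbccddeeff   # same value as key0 above
path=/tmp/:spdk-test:key0
python3 - "$key" <<'PY' > "$path"
import base64, struct, sys, zlib
key = sys.argv[1].encode()                             # configured key string, used byte-for-byte
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)  # assumed 4-byte little-endian CRC-32 suffix
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$path"
cat "$path"   # should resemble the NVMeTLSkey-1:00:MDAx...: value logged below
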
00:20:41.226 [2024-11-17 13:30:30.202302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85396 ] 00:20:41.226 [2024-11-17 13:30:30.341241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.226 [2024-11-17 13:30:30.393387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.485 [2024-11-17 13:30:30.482397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:42.053 13:30:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:42.053 [2024-11-17 13:30:31.137017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.053 null0 00:20:42.053 [2024-11-17 13:30:31.168992] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.053 [2024-11-17 13:30:31.169193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.053 13:30:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:42.053 624030118 00:20:42.053 13:30:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:42.053 183073600 00:20:42.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:42.053 13:30:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85410 00:20:42.053 13:30:31 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:42.053 13:30:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85410 /var/tmp/bperf.sock 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85410 ']' 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.053 13:30:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:42.053 [2024-11-17 13:30:31.251570] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
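
The serial numbers printed above (624030118 for :spdk-test:key0 and 183073600 for :spdk-test:key1) are what the kernel hands back when the interchange strings are loaded into the session keyring; they are what the keyring_linux module resolves the key names against later in the run. Taken on their own, the underlying keyutils operations look like this (serials will differ on any other system):

# Load a user-type key named ":spdk-test:key0" into the session keyring (@s);
# keyctl prints the new key's serial number on success
sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
echo "serial: $sn"
# Lookups resolve the name back to that serial, and print recovers the payload
keyctl search @s user :spdk-test:key0
keyctl print "$sn"
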
00:20:42.053 [2024-11-17 13:30:31.251851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85410 ] 00:20:42.312 [2024-11-17 13:30:31.403576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.312 [2024-11-17 13:30:31.461970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.248 13:30:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.248 13:30:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:43.248 13:30:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:43.248 13:30:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:43.248 13:30:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:43.248 13:30:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:43.507 [2024-11-17 13:30:32.633852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.507 13:30:32 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:43.507 13:30:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:43.765 [2024-11-17 13:30:32.942495] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.024 nvme0n1 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:44.024 13:30:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:44.024 13:30:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:44.024 13:30:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:44.024 13:30:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:44.283 13:30:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@25 -- # sn=624030118 00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
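
Stripped of the bperf_cmd wrapper, the initiator side above amounts to three JSON-RPCs against the bdevperf application, which was started with -r /var/tmp/bperf.sock and --wait-for-rpc: enable the Linux keyring backend, finish framework init, then attach an NVMe/TCP controller whose PSK is named by kernel key rather than by file path. Condensed:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC keyring_linux_set_options --enable       # allow lookups in the kernel session keyring
$RPC framework_start_init                     # complete the init deferred by --wait-for-rpc
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0                     # PSK referenced by keyring name

Once the controller is up, the bdevperf.py perform_tests helper starts the one-second randread pass whose results follow; the same attach is later repeated with --psk :spdk-test:key1, a key the target was never configured with, and is expected to fail.
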
00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 624030118 == \6\2\4\0\3\0\1\1\8 ]] 00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 624030118 00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:44.542 13:30:33 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:44.542 Running I/O for 1 seconds... 00:20:45.479 13905.00 IOPS, 54.32 MiB/s 00:20:45.479 Latency(us) 00:20:45.479 [2024-11-17T13:30:34.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:45.479 nvme0n1 : 1.01 13903.93 54.31 0.00 0.00 9158.77 3187.43 12094.37 00:20:45.479 [2024-11-17T13:30:34.703Z] =================================================================================================================== 00:20:45.479 [2024-11-17T13:30:34.703Z] Total : 13903.93 54.31 0.00 0.00 9158.77 3187.43 12094.37 00:20:45.479 { 00:20:45.479 "results": [ 00:20:45.479 { 00:20:45.479 "job": "nvme0n1", 00:20:45.479 "core_mask": "0x2", 00:20:45.479 "workload": "randread", 00:20:45.479 "status": "finished", 00:20:45.479 "queue_depth": 128, 00:20:45.479 "io_size": 4096, 00:20:45.479 "runtime": 1.009355, 00:20:45.479 "iops": 13903.928746575784, 00:20:45.479 "mibps": 54.31222166631166, 00:20:45.479 "io_failed": 0, 00:20:45.479 "io_timeout": 0, 00:20:45.479 "avg_latency_us": 9158.766271004184, 00:20:45.479 "min_latency_us": 3187.4327272727273, 00:20:45.479 "max_latency_us": 12094.370909090909 00:20:45.479 } 00:20:45.479 ], 00:20:45.479 "core_count": 1 00:20:45.479 } 00:20:45.479 13:30:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:45.479 13:30:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:45.738 13:30:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:45.738 13:30:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:45.738 13:30:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:45.738 13:30:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:45.738 13:30:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:45.738 13:30:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:45.997 13:30:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:45.997 13:30:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:45.997 13:30:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:45.997 13:30:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:45.997 13:30:35 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:20:45.997 13:30:35 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:45.997 
13:30:35 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:45.997 13:30:35 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.997 13:30:35 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:45.997 13:30:35 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.997 13:30:35 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:45.997 13:30:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:46.271 [2024-11-17 13:30:35.412468] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.271 [2024-11-17 13:30:35.412668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e75d0 (107): Transport endpoint is not connected 00:20:46.271 [2024-11-17 13:30:35.413660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e75d0 (9): Bad file descriptor 00:20:46.271 [2024-11-17 13:30:35.414657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:46.271 [2024-11-17 13:30:35.414914] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:46.271 [2024-11-17 13:30:35.415038] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:46.271 [2024-11-17 13:30:35.415183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:46.271 request: 00:20:46.271 { 00:20:46.271 "name": "nvme0", 00:20:46.271 "trtype": "tcp", 00:20:46.271 "traddr": "127.0.0.1", 00:20:46.271 "adrfam": "ipv4", 00:20:46.271 "trsvcid": "4420", 00:20:46.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:46.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:46.271 "prchk_reftag": false, 00:20:46.271 "prchk_guard": false, 00:20:46.271 "hdgst": false, 00:20:46.271 "ddgst": false, 00:20:46.271 "psk": ":spdk-test:key1", 00:20:46.271 "allow_unrecognized_csi": false, 00:20:46.271 "method": "bdev_nvme_attach_controller", 00:20:46.271 "req_id": 1 00:20:46.271 } 00:20:46.271 Got JSON-RPC error response 00:20:46.271 response: 00:20:46.271 { 00:20:46.271 "code": -5, 00:20:46.271 "message": "Input/output error" 00:20:46.271 } 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@33 -- # sn=624030118 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 624030118 00:20:46.271 1 links removed 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@33 -- # sn=183073600 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 183073600 00:20:46.271 1 links removed 00:20:46.271 13:30:35 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85410 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85410 ']' 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85410 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.271 13:30:35 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85410 00:20:46.553 killing process with pid 85410 00:20:46.553 Received shutdown signal, test time was about 1.000000 seconds 00:20:46.553 00:20:46.553 Latency(us) 00:20:46.553 [2024-11-17T13:30:35.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.553 [2024-11-17T13:30:35.777Z] =================================================================================================================== 00:20:46.553 [2024-11-17T13:30:35.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.553 13:30:35 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85410' 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@973 -- # kill 85410 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@978 -- # wait 85410 00:20:46.553 13:30:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85396 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85396 ']' 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85396 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85396 00:20:46.553 killing process with pid 85396 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85396' 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@973 -- # kill 85396 00:20:46.553 13:30:35 keyring_linux -- common/autotest_common.sh@978 -- # wait 85396 00:20:47.126 00:20:47.126 real 0m6.371s 00:20:47.126 user 0m11.880s 00:20:47.126 sys 0m1.687s 00:20:47.126 13:30:36 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.126 ************************************ 00:20:47.126 END TEST keyring_linux 00:20:47.126 ************************************ 00:20:47.126 13:30:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:47.126 13:30:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:47.126 13:30:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:47.126 13:30:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:47.126 13:30:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:47.126 13:30:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:47.126 13:30:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:20:47.126 13:30:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:47.126 13:30:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.126 13:30:36 -- common/autotest_common.sh@10 -- # set +x 00:20:47.126 13:30:36 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:47.126 13:30:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:47.126 13:30:36 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:47.126 13:30:36 -- common/autotest_common.sh@10 -- # set +x 00:20:49.030 INFO: APP EXITING 00:20:49.030 INFO: killing all VMs 
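
The cleanup trap that ran above reverses the key setup before the target and bdevperf processes are killed: each :spdk-test:keyN name is resolved back to its serial and unlinked from the session keyring, which is where the "1 links removed" lines come from. On its own that teardown is roughly the following (removing the /tmp key files is added here for completeness and is not shown in the log):

for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name") || continue   # skip names that are already gone
    keyctl unlink "$sn"                               # prints "1 links removed" on success
    rm -f "/tmp/$name"                                # assumed: drop the throwaway 0600 key file
done
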
00:20:49.030 INFO: killing vhost app 00:20:49.030 INFO: EXIT DONE 00:20:49.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:49.966 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:49.966 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:50.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:50.533 Cleaning 00:20:50.533 Removing: /var/run/dpdk/spdk0/config 00:20:50.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:50.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:50.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:50.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:50.533 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:50.533 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:50.533 Removing: /var/run/dpdk/spdk1/config 00:20:50.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:50.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:50.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:50.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:50.533 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:50.533 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:50.792 Removing: /var/run/dpdk/spdk2/config 00:20:50.792 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:50.792 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:50.792 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:50.792 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:50.792 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:50.792 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:50.792 Removing: /var/run/dpdk/spdk3/config 00:20:50.792 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:50.792 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:50.792 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:50.792 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:50.792 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:50.792 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:50.792 Removing: /var/run/dpdk/spdk4/config 00:20:50.792 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:50.792 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:50.792 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:50.792 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:50.792 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:50.792 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:50.792 Removing: /dev/shm/nvmf_trace.0 00:20:50.792 Removing: /dev/shm/spdk_tgt_trace.pid56796 00:20:50.792 Removing: /var/run/dpdk/spdk0 00:20:50.792 Removing: /var/run/dpdk/spdk1 00:20:50.792 Removing: /var/run/dpdk/spdk2 00:20:50.792 Removing: /var/run/dpdk/spdk3 00:20:50.792 Removing: /var/run/dpdk/spdk4 00:20:50.792 Removing: /var/run/dpdk/spdk_pid56638 00:20:50.792 Removing: /var/run/dpdk/spdk_pid56796 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57000 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57081 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57101 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57211 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57221 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57361 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57556 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57709 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57783 00:20:50.792 
Removing: /var/run/dpdk/spdk_pid57859 00:20:50.792 Removing: /var/run/dpdk/spdk_pid57957 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58030 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58069 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58099 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58168 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58249 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58688 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58738 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58788 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58792 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58859 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58873 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58940 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58949 00:20:50.792 Removing: /var/run/dpdk/spdk_pid58994 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59005 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59050 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59068 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59199 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59234 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59311 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59651 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59663 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59698 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59713 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59728 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59747 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59761 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59782 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59801 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59814 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59830 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59849 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59868 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59882 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59902 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59916 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59937 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59955 00:20:50.792 Removing: /var/run/dpdk/spdk_pid59964 00:20:51.052 Removing: /var/run/dpdk/spdk_pid59985 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60010 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60029 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60064 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60125 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60159 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60163 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60197 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60205 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60214 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60261 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60270 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60304 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60308 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60323 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60327 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60342 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60346 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60361 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60365 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60399 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60422 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60437 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60466 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60475 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60483 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60523 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60535 00:20:51.052 Removing: 
/var/run/dpdk/spdk_pid60561 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60574 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60576 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60589 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60591 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60605 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60607 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60620 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60699 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60746 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60864 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60900 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60945 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60965 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60980 00:20:51.052 Removing: /var/run/dpdk/spdk_pid60996 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61033 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61049 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61128 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61151 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61195 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61263 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61319 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61352 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61446 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61494 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61527 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61753 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61856 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61885 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61914 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61948 00:20:51.052 Removing: /var/run/dpdk/spdk_pid61981 00:20:51.052 Removing: /var/run/dpdk/spdk_pid62020 00:20:51.052 Removing: /var/run/dpdk/spdk_pid62052 00:20:51.052 Removing: /var/run/dpdk/spdk_pid62441 00:20:51.052 Removing: /var/run/dpdk/spdk_pid62480 00:20:51.052 Removing: /var/run/dpdk/spdk_pid62823 00:20:51.052 Removing: /var/run/dpdk/spdk_pid63287 00:20:51.052 Removing: /var/run/dpdk/spdk_pid63571 00:20:51.052 Removing: /var/run/dpdk/spdk_pid64427 00:20:51.052 Removing: /var/run/dpdk/spdk_pid65339 00:20:51.052 Removing: /var/run/dpdk/spdk_pid65456 00:20:51.052 Removing: /var/run/dpdk/spdk_pid65529 00:20:51.052 Removing: /var/run/dpdk/spdk_pid66942 00:20:51.052 Removing: /var/run/dpdk/spdk_pid67257 00:20:51.052 Removing: /var/run/dpdk/spdk_pid70814 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71160 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71273 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71401 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71430 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71451 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71472 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71564 00:20:51.052 Removing: /var/run/dpdk/spdk_pid71692 00:20:51.312 Removing: /var/run/dpdk/spdk_pid71843 00:20:51.312 Removing: /var/run/dpdk/spdk_pid71926 00:20:51.312 Removing: /var/run/dpdk/spdk_pid72107 00:20:51.312 Removing: /var/run/dpdk/spdk_pid72190 00:20:51.312 Removing: /var/run/dpdk/spdk_pid72275 00:20:51.312 Removing: /var/run/dpdk/spdk_pid72633 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73046 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73047 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73048 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73310 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73567 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73950 00:20:51.312 Removing: /var/run/dpdk/spdk_pid73958 00:20:51.312 Removing: /var/run/dpdk/spdk_pid74275 00:20:51.312 Removing: /var/run/dpdk/spdk_pid74295 
00:20:51.312 Removing: /var/run/dpdk/spdk_pid74309 00:20:51.312 Removing: /var/run/dpdk/spdk_pid74334 00:20:51.312 Removing: /var/run/dpdk/spdk_pid74350 00:20:51.312 Removing: /var/run/dpdk/spdk_pid74697 00:20:51.312 Removing: /var/run/dpdk/spdk_pid74750 00:20:51.312 Removing: /var/run/dpdk/spdk_pid75074 00:20:51.312 Removing: /var/run/dpdk/spdk_pid75277 00:20:51.312 Removing: /var/run/dpdk/spdk_pid75702 00:20:51.312 Removing: /var/run/dpdk/spdk_pid76243 00:20:51.312 Removing: /var/run/dpdk/spdk_pid77099 00:20:51.312 Removing: /var/run/dpdk/spdk_pid77737 00:20:51.312 Removing: /var/run/dpdk/spdk_pid77740 00:20:51.312 Removing: /var/run/dpdk/spdk_pid79737 00:20:51.312 Removing: /var/run/dpdk/spdk_pid79786 00:20:51.312 Removing: /var/run/dpdk/spdk_pid79852 00:20:51.312 Removing: /var/run/dpdk/spdk_pid79900 00:20:51.312 Removing: /var/run/dpdk/spdk_pid80000 00:20:51.312 Removing: /var/run/dpdk/spdk_pid80062 00:20:51.312 Removing: /var/run/dpdk/spdk_pid80122 00:20:51.312 Removing: /var/run/dpdk/spdk_pid80178 00:20:51.312 Removing: /var/run/dpdk/spdk_pid80540 00:20:51.312 Removing: /var/run/dpdk/spdk_pid81775 00:20:51.312 Removing: /var/run/dpdk/spdk_pid81912 00:20:51.312 Removing: /var/run/dpdk/spdk_pid82146 00:20:51.312 Removing: /var/run/dpdk/spdk_pid82738 00:20:51.312 Removing: /var/run/dpdk/spdk_pid82906 00:20:51.312 Removing: /var/run/dpdk/spdk_pid83064 00:20:51.312 Removing: /var/run/dpdk/spdk_pid83161 00:20:51.312 Removing: /var/run/dpdk/spdk_pid83326 00:20:51.312 Removing: /var/run/dpdk/spdk_pid83435 00:20:51.312 Removing: /var/run/dpdk/spdk_pid84140 00:20:51.312 Removing: /var/run/dpdk/spdk_pid84181 00:20:51.312 Removing: /var/run/dpdk/spdk_pid84215 00:20:51.312 Removing: /var/run/dpdk/spdk_pid84470 00:20:51.312 Removing: /var/run/dpdk/spdk_pid84501 00:20:51.312 Removing: /var/run/dpdk/spdk_pid84535 00:20:51.312 Removing: /var/run/dpdk/spdk_pid85010 00:20:51.312 Removing: /var/run/dpdk/spdk_pid85020 00:20:51.312 Removing: /var/run/dpdk/spdk_pid85270 00:20:51.312 Removing: /var/run/dpdk/spdk_pid85396 00:20:51.312 Removing: /var/run/dpdk/spdk_pid85410 00:20:51.312 Clean 00:20:51.312 13:30:40 -- common/autotest_common.sh@1453 -- # return 0 00:20:51.312 13:30:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:51.312 13:30:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.312 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:51.571 13:30:40 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:51.571 13:30:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.571 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:51.571 13:30:40 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:51.571 13:30:40 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:51.571 13:30:40 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:51.571 13:30:40 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:51.571 13:30:40 -- spdk/autotest.sh@398 -- # hostname 00:20:51.571 13:30:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:51.830 geninfo: WARNING: invalid characters removed from testname! 
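
The coverage stage around this point is verbose mainly because every lcov invocation repeats the same --rc flags; the capture above and the merge/prune calls that follow reduce to a short pipeline. A condensed restatement, mirroring the flags in the surrounding entries with the --rc options elided and output paths shortened to bare file names:

# Capture coverage for the repo after the test run
lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o cov_test.info
# Merge the pre-test baseline with the post-test capture
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# Prune third-party, system and example code from the merged report
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
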
00:21:13.760 13:31:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:17.047 13:31:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:19.582 13:31:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:22.128 13:31:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:24.662 13:31:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:27.195 13:31:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:29.098 13:31:18 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:29.098 13:31:18 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:29.098 13:31:18 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:29.098 13:31:18 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:29.098 13:31:18 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:29.098 13:31:18 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:29.357 + [[ -n 5367 ]] 00:21:29.357 + sudo kill 5367 00:21:29.367 [Pipeline] } 00:21:29.386 [Pipeline] // timeout 00:21:29.392 [Pipeline] } 00:21:29.409 [Pipeline] // stage 00:21:29.414 [Pipeline] } 00:21:29.431 [Pipeline] // catchError 00:21:29.442 [Pipeline] stage 00:21:29.445 [Pipeline] { (Stop VM) 00:21:29.461 [Pipeline] sh 00:21:29.745 + vagrant halt 00:21:32.276 ==> default: Halting domain... 
00:21:38.858 [Pipeline] sh 00:21:39.139 + vagrant destroy -f 00:21:41.690 ==> default: Removing domain... 00:21:41.962 [Pipeline] sh 00:21:42.265 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:42.328 [Pipeline] } 00:21:42.343 [Pipeline] // stage 00:21:42.348 [Pipeline] } 00:21:42.362 [Pipeline] // dir 00:21:42.367 [Pipeline] } 00:21:42.381 [Pipeline] // wrap 00:21:42.388 [Pipeline] } 00:21:42.400 [Pipeline] // catchError 00:21:42.410 [Pipeline] stage 00:21:42.413 [Pipeline] { (Epilogue) 00:21:42.426 [Pipeline] sh 00:21:42.707 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:47.991 [Pipeline] catchError 00:21:47.993 [Pipeline] { 00:21:48.006 [Pipeline] sh 00:21:48.307 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:48.307 Artifacts sizes are good 00:21:48.316 [Pipeline] } 00:21:48.330 [Pipeline] // catchError 00:21:48.342 [Pipeline] archiveArtifacts 00:21:48.349 Archiving artifacts 00:21:48.476 [Pipeline] cleanWs 00:21:48.487 [WS-CLEANUP] Deleting project workspace... 00:21:48.487 [WS-CLEANUP] Deferred wipeout is used... 00:21:48.494 [WS-CLEANUP] done 00:21:48.496 [Pipeline] } 00:21:48.511 [Pipeline] // stage 00:21:48.517 [Pipeline] } 00:21:48.532 [Pipeline] // node 00:21:48.538 [Pipeline] End of Pipeline 00:21:48.591 Finished: SUCCESS